00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3669 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3271 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.121 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.157 Using shallow fetch with depth 1 00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.157 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.327 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.339 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.352 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.352 > git config core.sparsecheckout # timeout=10 00:00:04.364 > git read-tree -mu HEAD # timeout=10 00:00:04.381 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.404 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.404 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.498 [Pipeline] Start of Pipeline 00:00:04.512 [Pipeline] library 00:00:04.513 Loading library shm_lib@master 00:00:04.513 Library shm_lib@master is cached. Copying from home. 00:00:04.530 [Pipeline] node 00:00:04.537 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.538 [Pipeline] { 00:00:04.547 [Pipeline] catchError 00:00:04.548 [Pipeline] { 00:00:04.559 [Pipeline] wrap 00:00:04.567 [Pipeline] { 00:00:04.575 [Pipeline] stage 00:00:04.577 [Pipeline] { (Prologue) 00:00:04.595 [Pipeline] echo 00:00:04.596 Node: VM-host-SM16 00:00:04.602 [Pipeline] cleanWs 00:00:04.611 [WS-CLEANUP] Deleting project workspace... 00:00:04.611 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.617 [WS-CLEANUP] done 00:00:04.790 [Pipeline] setCustomBuildProperty 00:00:04.874 [Pipeline] httpRequest 00:00:04.902 [Pipeline] echo 00:00:04.904 Sorcerer 10.211.164.101 is alive 00:00:04.914 [Pipeline] httpRequest 00:00:04.918 HttpMethod: GET 00:00:04.919 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.919 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.920 Response Code: HTTP/1.1 200 OK 00:00:04.920 Success: Status code 200 is in the accepted range: 200,404 00:00:04.920 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.803 [Pipeline] sh 00:00:06.077 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.088 [Pipeline] httpRequest 00:00:06.118 [Pipeline] echo 00:00:06.120 Sorcerer 10.211.164.101 is alive 00:00:06.127 [Pipeline] httpRequest 00:00:06.130 HttpMethod: GET 00:00:06.131 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:06.131 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:06.145 Response Code: HTTP/1.1 200 OK 00:00:06.145 Success: Status code 200 is in the accepted range: 200,404 00:00:06.146 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:32.815 [Pipeline] sh 00:00:33.091 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:36.378 [Pipeline] sh 00:00:36.680 + git -C spdk log --oneline -n5 00:00:36.680 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 00:00:36.680 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:36.680 2d30d9f83 accel: introduce tasks in sequence limit 00:00:36.680 2728651ee accel: adjust task per ch define name 00:00:36.680 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:36.701 [Pipeline] withCredentials 00:00:36.711 > git --version # timeout=10 00:00:36.724 > git --version # 'git version 2.39.2' 00:00:36.738 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:36.741 [Pipeline] { 00:00:36.750 [Pipeline] retry 00:00:36.752 [Pipeline] { 00:00:36.773 [Pipeline] sh 00:00:37.051 + git ls-remote http://dpdk.org/git/dpdk main 00:00:39.585 [Pipeline] } 00:00:39.608 [Pipeline] // retry 00:00:39.613 [Pipeline] } 00:00:39.633 [Pipeline] // withCredentials 00:00:39.642 [Pipeline] httpRequest 00:00:39.660 [Pipeline] echo 00:00:39.662 Sorcerer 10.211.164.101 is alive 00:00:39.669 [Pipeline] httpRequest 00:00:39.673 HttpMethod: GET 00:00:39.674 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:39.674 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:39.690 Response Code: HTTP/1.1 200 OK 00:00:39.690 Success: Status code 200 is in the accepted range: 200,404 00:00:39.690 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:53.390 [Pipeline] sh 00:00:53.667 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:55.105 [Pipeline] sh 00:00:55.414 + git -C dpdk log --oneline -n5 00:00:55.414 fa8d2f7f28 version: 24.07-rc2 00:00:55.414 d4bc3c2e01 maintainers: update for cxgbe driver 00:00:55.414 2227c0ed9a maintainers: update for Microsoft drivers 
00:00:55.414 8385370337 maintainers: update for Arm 00:00:55.414 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:00:55.433 [Pipeline] writeFile 00:00:55.450 [Pipeline] sh 00:00:55.728 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:55.740 [Pipeline] sh 00:00:56.019 + cat autorun-spdk.conf 00:00:56.019 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.019 SPDK_TEST_NVME=1 00:00:56.019 SPDK_TEST_FTL=1 00:00:56.019 SPDK_TEST_ISAL=1 00:00:56.019 SPDK_RUN_ASAN=1 00:00:56.019 SPDK_RUN_UBSAN=1 00:00:56.019 SPDK_TEST_XNVME=1 00:00:56.019 SPDK_TEST_NVME_FDP=1 00:00:56.019 SPDK_TEST_NATIVE_DPDK=main 00:00:56.019 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:56.019 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.025 RUN_NIGHTLY=1 00:00:56.028 [Pipeline] } 00:00:56.046 [Pipeline] // stage 00:00:56.066 [Pipeline] stage 00:00:56.069 [Pipeline] { (Run VM) 00:00:56.084 [Pipeline] sh 00:00:56.363 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:56.364 + echo 'Start stage prepare_nvme.sh' 00:00:56.364 Start stage prepare_nvme.sh 00:00:56.364 + [[ -n 1 ]] 00:00:56.364 + disk_prefix=ex1 00:00:56.364 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:56.364 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:56.364 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:56.364 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.364 ++ SPDK_TEST_NVME=1 00:00:56.364 ++ SPDK_TEST_FTL=1 00:00:56.364 ++ SPDK_TEST_ISAL=1 00:00:56.364 ++ SPDK_RUN_ASAN=1 00:00:56.364 ++ SPDK_RUN_UBSAN=1 00:00:56.364 ++ SPDK_TEST_XNVME=1 00:00:56.364 ++ SPDK_TEST_NVME_FDP=1 00:00:56.364 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:56.364 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:56.364 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.364 ++ RUN_NIGHTLY=1 00:00:56.364 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:56.364 + nvme_files=() 00:00:56.364 + declare -A nvme_files 00:00:56.364 + backend_dir=/var/lib/libvirt/images/backends 00:00:56.364 + nvme_files['nvme.img']=5G 00:00:56.364 + nvme_files['nvme-cmb.img']=5G 00:00:56.364 + nvme_files['nvme-multi0.img']=4G 00:00:56.364 + nvme_files['nvme-multi1.img']=4G 00:00:56.364 + nvme_files['nvme-multi2.img']=4G 00:00:56.364 + nvme_files['nvme-openstack.img']=8G 00:00:56.364 + nvme_files['nvme-zns.img']=5G 00:00:56.364 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:56.364 + (( SPDK_TEST_FTL == 1 )) 00:00:56.364 + nvme_files["nvme-ftl.img"]=6G 00:00:56.364 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:56.364 + nvme_files["nvme-fdp.img"]=1G 00:00:56.364 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:56.364 + for nvme in "${!nvme_files[@]}" 00:00:56.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:56.364 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.364 + for nvme in "${!nvme_files[@]}" 00:00:56.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:56.364 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:56.364 + for nvme in "${!nvme_files[@]}" 00:00:56.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:56.364 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.364 + for nvme in "${!nvme_files[@]}" 00:00:56.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:56.364 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:56.364 + for nvme in "${!nvme_files[@]}" 00:00:56.364 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:56.661 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.661 + for nvme in "${!nvme_files[@]}" 00:00:56.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:56.661 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.661 + for nvme in "${!nvme_files[@]}" 00:00:56.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:56.661 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.661 + for nvme in "${!nvme_files[@]}" 00:00:56.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:56.661 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:56.661 + for nvme in "${!nvme_files[@]}" 00:00:56.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:56.661 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.661 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:56.661 + echo 'End stage prepare_nvme.sh' 00:00:56.661 End stage prepare_nvme.sh 00:00:56.673 [Pipeline] sh 00:00:56.952 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:56.952 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:56.952 00:00:56.952 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:56.952 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:56.952 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:56.952 HELP=0 00:00:56.952 DRY_RUN=0 00:00:56.952 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:56.952 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:56.952 NVME_AUTO_CREATE=0 00:00:56.952 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:56.952 NVME_CMB=,,,, 00:00:56.952 NVME_PMR=,,,, 00:00:56.952 NVME_ZNS=,,,, 00:00:56.952 NVME_MS=true,,,, 00:00:56.952 NVME_FDP=,,,on, 00:00:56.952 SPDK_VAGRANT_DISTRO=fedora38 00:00:56.952 SPDK_VAGRANT_VMCPU=10 00:00:56.952 SPDK_VAGRANT_VMRAM=12288 00:00:56.952 SPDK_VAGRANT_PROVIDER=libvirt 00:00:56.952 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:56.952 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:56.952 SPDK_OPENSTACK_NETWORK=0 00:00:56.952 VAGRANT_PACKAGE_BOX=0 00:00:56.952 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:56.952 FORCE_DISTRO=true 00:00:56.952 VAGRANT_BOX_VERSION= 00:00:56.952 EXTRA_VAGRANTFILES= 00:00:56.952 NIC_MODEL=e1000 00:00:56.952 00:00:56.952 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:00:56.952 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:00.261 Bringing machine 'default' up with 'libvirt' provider... 00:01:00.519 ==> default: Creating image (snapshot of base box volume). 00:01:00.777 ==> default: Creating domain with the following settings... 
00:01:00.777 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721063231_675df1cf2d4518163eed 00:01:00.777 ==> default: -- Domain type: kvm 00:01:00.777 ==> default: -- Cpus: 10 00:01:00.777 ==> default: -- Feature: acpi 00:01:00.777 ==> default: -- Feature: apic 00:01:00.777 ==> default: -- Feature: pae 00:01:00.777 ==> default: -- Memory: 12288M 00:01:00.777 ==> default: -- Memory Backing: hugepages: 00:01:00.777 ==> default: -- Management MAC: 00:01:00.777 ==> default: -- Loader: 00:01:00.777 ==> default: -- Nvram: 00:01:00.777 ==> default: -- Base box: spdk/fedora38 00:01:00.777 ==> default: -- Storage pool: default 00:01:00.777 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721063231_675df1cf2d4518163eed.img (20G) 00:01:00.777 ==> default: -- Volume Cache: default 00:01:00.777 ==> default: -- Kernel: 00:01:00.777 ==> default: -- Initrd: 00:01:00.777 ==> default: -- Graphics Type: vnc 00:01:00.777 ==> default: -- Graphics Port: -1 00:01:00.777 ==> default: -- Graphics IP: 127.0.0.1 00:01:00.777 ==> default: -- Graphics Password: Not defined 00:01:00.777 ==> default: -- Video Type: cirrus 00:01:00.777 ==> default: -- Video VRAM: 9216 00:01:00.777 ==> default: -- Sound Type: 00:01:00.777 ==> default: -- Keymap: en-us 00:01:00.777 ==> default: -- TPM Path: 00:01:00.777 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:00.777 ==> default: -- Command line args: 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:00.777 ==> default: -> value=-drive, 00:01:00.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:00.777 ==> default: -> value=-device, 00:01:00.777 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.777 ==> default: Creating shared folders metadata... 00:01:00.777 ==> default: Starting domain. 00:01:02.676 ==> default: Waiting for domain to get an IP address... 00:01:17.621 ==> default: Waiting for SSH to become available... 00:01:19.002 ==> default: Configuring and enabling network interfaces... 00:01:24.285 default: SSH address: 192.168.121.122:22 00:01:24.285 default: SSH username: vagrant 00:01:24.285 default: SSH auth method: private key 00:01:26.187 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:34.369 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:39.665 ==> default: Mounting SSHFS shared folder... 00:01:41.038 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.038 ==> default: Checking Mount.. 00:01:42.409 ==> default: Folder Successfully Mounted! 00:01:42.409 ==> default: Running provisioner: file... 00:01:43.353 default: ~/.gitconfig => .gitconfig 00:01:43.610 00:01:43.610 SUCCESS! 00:01:43.610 00:01:43.610 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:43.610 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:43.610 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:43.610 00:01:43.619 [Pipeline] } 00:01:43.637 [Pipeline] // stage 00:01:43.646 [Pipeline] dir 00:01:43.646 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:43.648 [Pipeline] { 00:01:43.662 [Pipeline] catchError 00:01:43.664 [Pipeline] { 00:01:43.677 [Pipeline] sh 00:01:43.953 + vagrant ssh-config --host vagrant 00:01:43.953 + sed -ne /^Host/,$p 00:01:43.953 + tee ssh_conf 00:01:48.155 Host vagrant 00:01:48.155 HostName 192.168.121.122 00:01:48.155 User vagrant 00:01:48.155 Port 22 00:01:48.155 UserKnownHostsFile /dev/null 00:01:48.155 StrictHostKeyChecking no 00:01:48.155 PasswordAuthentication no 00:01:48.155 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:48.155 IdentitiesOnly yes 00:01:48.155 LogLevel FATAL 00:01:48.155 ForwardAgent yes 00:01:48.155 ForwardX11 yes 00:01:48.155 00:01:48.169 [Pipeline] withEnv 00:01:48.171 [Pipeline] { 00:01:48.188 [Pipeline] sh 00:01:48.498 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:48.498 source /etc/os-release 00:01:48.498 [[ -e /image.version ]] && img=$(< /image.version) 00:01:48.498 # Minimal, systemd-like check. 
00:01:48.498 if [[ -e /.dockerenv ]]; then 00:01:48.498 # Clear garbage from the node's name: 00:01:48.498 # agt-er_autotest_547-896 -> autotest_547-896 00:01:48.498 # $HOSTNAME is the actual container id 00:01:48.498 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:48.498 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:48.498 # We can assume this is a mount from a host where container is running, 00:01:48.498 # so fetch its hostname to easily identify the target swarm worker. 00:01:48.498 container="$(< /etc/hostname) ($agent)" 00:01:48.498 else 00:01:48.498 # Fallback 00:01:48.498 container=$agent 00:01:48.498 fi 00:01:48.498 fi 00:01:48.498 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:48.498 00:01:48.507 [Pipeline] } 00:01:48.526 [Pipeline] // withEnv 00:01:48.535 [Pipeline] setCustomBuildProperty 00:01:48.552 [Pipeline] stage 00:01:48.554 [Pipeline] { (Tests) 00:01:48.574 [Pipeline] sh 00:01:48.853 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.124 [Pipeline] sh 00:01:49.401 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.672 [Pipeline] timeout 00:01:49.673 Timeout set to expire in 40 min 00:01:49.674 [Pipeline] { 00:01:49.690 [Pipeline] sh 00:01:49.966 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:50.530 HEAD is now at a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 00:01:50.539 [Pipeline] sh 00:01:50.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.084 [Pipeline] sh 00:01:51.361 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.635 [Pipeline] sh 00:01:51.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:52.171 ++ readlink -f spdk_repo 00:01:52.171 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.171 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.171 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.171 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.171 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:52.171 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:52.171 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.171 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:52.171 + cd /home/vagrant/spdk_repo 00:01:52.171 + source /etc/os-release 00:01:52.171 ++ NAME='Fedora Linux' 00:01:52.171 ++ VERSION='38 (Cloud Edition)' 00:01:52.171 ++ ID=fedora 00:01:52.171 ++ VERSION_ID=38 00:01:52.171 ++ VERSION_CODENAME= 00:01:52.171 ++ PLATFORM_ID=platform:f38 00:01:52.171 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:52.171 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.171 ++ LOGO=fedora-logo-icon 00:01:52.171 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:52.171 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.171 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:52.171 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.171 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.171 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.171 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:52.171 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.171 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:52.171 ++ SUPPORT_END=2024-05-14 00:01:52.171 ++ VARIANT='Cloud Edition' 00:01:52.171 ++ VARIANT_ID=cloud 00:01:52.171 + uname -a 00:01:52.171 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:52.171 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:52.687 Hugepages 00:01:52.687 node hugesize free / total 00:01:52.687 node0 1048576kB 0 / 0 00:01:52.687 node0 2048kB 0 / 0 00:01:52.687 00:01:52.687 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.944 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.944 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.944 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:52.944 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:52.944 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:52.944 + rm -f /tmp/spdk-ld-path 00:01:52.944 + source autorun-spdk.conf 00:01:52.944 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.944 ++ SPDK_TEST_NVME=1 00:01:52.944 ++ SPDK_TEST_FTL=1 00:01:52.944 ++ SPDK_TEST_ISAL=1 00:01:52.944 ++ SPDK_RUN_ASAN=1 00:01:52.944 ++ SPDK_RUN_UBSAN=1 00:01:52.944 ++ SPDK_TEST_XNVME=1 00:01:52.944 ++ SPDK_TEST_NVME_FDP=1 00:01:52.944 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:52.944 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:52.944 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.944 ++ RUN_NIGHTLY=1 00:01:52.944 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.944 + [[ -n '' ]] 00:01:52.944 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:52.944 + for M in /var/spdk/build-*-manifest.txt 00:01:52.944 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:52.944 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.944 + for M in /var/spdk/build-*-manifest.txt 00:01:52.944 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:52.944 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.944 ++ uname 00:01:52.944 + [[ Linux == \L\i\n\u\x ]] 00:01:52.944 + sudo dmesg -T 00:01:52.944 + sudo dmesg --clear 00:01:52.944 + dmesg_pid=6041 00:01:52.944 + [[ Fedora Linux == FreeBSD ]] 00:01:52.944 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.944 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.944 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:52.944 + [[ -x /usr/src/fio-static/fio ]] 00:01:52.944 + sudo dmesg -Tw 00:01:52.944 + export FIO_BIN=/usr/src/fio-static/fio 00:01:52.944 + FIO_BIN=/usr/src/fio-static/fio 00:01:52.944 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:52.944 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:52.944 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:52.944 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.944 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.944 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:52.944 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.944 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.944 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.944 Test configuration: 00:01:52.944 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.944 SPDK_TEST_NVME=1 00:01:52.944 SPDK_TEST_FTL=1 00:01:52.944 SPDK_TEST_ISAL=1 00:01:52.944 SPDK_RUN_ASAN=1 00:01:52.944 SPDK_RUN_UBSAN=1 00:01:52.944 SPDK_TEST_XNVME=1 00:01:52.944 SPDK_TEST_NVME_FDP=1 00:01:52.944 SPDK_TEST_NATIVE_DPDK=main 00:01:52.944 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:52.944 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.202 RUN_NIGHTLY=1 17:08:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:53.202 17:08:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.202 17:08:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.202 17:08:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.202 17:08:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.202 17:08:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.202 17:08:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.202 17:08:03 -- paths/export.sh@5 -- $ export PATH 00:01:53.202 17:08:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:53.202 17:08:03 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:53.202 17:08:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:53.202 17:08:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721063283.XXXXXX 00:01:53.202 17:08:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721063283.nyRh8z 00:01:53.202 17:08:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:53.202 17:08:03 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:01:53.202 17:08:03 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:53.202 17:08:03 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:53.202 17:08:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:53.202 17:08:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.202 17:08:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:53.202 17:08:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:53.202 17:08:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.202 17:08:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:01:53.202 17:08:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:53.202 17:08:03 -- pm/common@17 -- $ local monitor 00:01:53.202 17:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.202 17:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.202 17:08:03 -- pm/common@25 -- $ sleep 1 00:01:53.202 17:08:03 -- pm/common@21 -- $ date +%s 00:01:53.202 17:08:03 -- pm/common@21 -- $ date +%s 00:01:53.202 17:08:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721063283 00:01:53.202 17:08:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721063283 00:01:53.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721063283_collect-vmstat.pm.log 00:01:53.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721063283_collect-cpu-load.pm.log 00:01:54.136 17:08:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:54.136 17:08:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.136 17:08:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.136 17:08:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:54.136 17:08:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.136 Mon Jul 15 05:08:04 PM UTC 2024 00:01:54.136 17:08:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.136 v24.09-pre-209-ga95bbf233 00:01:54.136 17:08:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:54.136 17:08:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:54.136 17:08:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:54.136 
17:08:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.136 17:08:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.136 ************************************ 00:01:54.136 START TEST asan 00:01:54.136 ************************************ 00:01:54.136 using asan 00:01:54.136 17:08:04 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:54.136 00:01:54.136 real 0m0.000s 00:01:54.136 user 0m0.000s 00:01:54.136 sys 0m0.000s 00:01:54.136 17:08:04 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:54.136 ************************************ 00:01:54.136 END TEST asan 00:01:54.136 ************************************ 00:01:54.136 17:08:04 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.136 17:08:04 -- common/autotest_common.sh@1142 -- $ return 0 00:01:54.136 17:08:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.136 17:08:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.136 17:08:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:54.136 17:08:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.136 17:08:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.395 ************************************ 00:01:54.395 START TEST ubsan 00:01:54.395 ************************************ 00:01:54.395 using ubsan 00:01:54.395 17:08:05 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:54.395 00:01:54.395 real 0m0.000s 00:01:54.395 user 0m0.000s 00:01:54.395 sys 0m0.000s 00:01:54.395 17:08:05 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:54.395 ************************************ 00:01:54.395 17:08:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.395 END TEST ubsan 00:01:54.395 ************************************ 00:01:54.395 17:08:05 -- common/autotest_common.sh@1142 -- $ return 0 00:01:54.395 17:08:05 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:54.395 17:08:05 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:54.395 17:08:05 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:54.395 17:08:05 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:54.395 17:08:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.395 17:08:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.395 ************************************ 00:01:54.395 START TEST build_native_dpdk 00:01:54.395 ************************************ 00:01:54.395 17:08:05 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:54.395 17:08:05 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:54.395 fa8d2f7f28 version: 24.07-rc2 00:01:54.395 d4bc3c2e01 maintainers: update for cxgbe driver 00:01:54.395 2227c0ed9a maintainers: update for Microsoft drivers 00:01:54.395 8385370337 maintainers: update for Arm 00:01:54.395 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:54.395 17:08:05 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:54.395 
17:08:05 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:54.395 17:08:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:54.396 17:08:05 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:54.396 patching file config/rte_config.h 00:01:54.396 Hunk #1 succeeded at 70 (offset 11 lines). 
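The xtrace above walks through a component-wise dotted-version comparison (split 24.07.0-rc2 and 21.11.0 on ".-:", compare fields left to right, stop at the first difference) before deciding to patch rte_config.h. A minimal standalone sketch of that idea follows; the helper name and the treatment of non-numeric fields like "rc2" are illustrative, not the actual scripts/common.sh implementation.

#!/usr/bin/env bash
# Sketch: return 0 if $1 is an older version than $2, else 1.
# Mirrors the trace above: split on '.', '-' and ':', compare numerically.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        # Non-numeric fields (e.g. "rc2") are treated as 0 in this sketch.
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal, so not less-than
}

# 24.07.0-rc2 is not older than 21.11.0, so the caller falls through to patch,
# which is the path taken in the log above.
version_lt 24.07.0-rc2 21.11.0 || echo "DPDK >= 21.11, patching rte_config.h"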
00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:54.396 17:08:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.660 The Meson build system 00:01:59.660 Version: 1.3.1 00:01:59.660 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:59.660 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:59.660 Build type: native build 00:01:59.660 Program cat found: YES (/usr/bin/cat) 00:01:59.660 Project name: DPDK 00:01:59.660 Project version: 24.07.0-rc2 00:01:59.660 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.660 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:59.660 Host machine cpu family: x86_64 00:01:59.660 Host machine cpu: x86_64 00:01:59.660 Message: ## Building in Developer Mode ## 00:01:59.660 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.660 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:59.660 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.660 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:59.660 Program cat found: YES (/usr/bin/cat) 00:01:59.660 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:59.660 Compiler for C supports arguments -march=native: YES 00:01:59.660 Checking for size of "void *" : 8 00:01:59.660 Checking for size of "void *" : 8 (cached) 00:01:59.660 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.660 Library m found: YES 00:01:59.660 Library numa found: YES 00:01:59.660 Has header "numaif.h" : YES 00:01:59.660 Library fdt found: NO 00:01:59.660 Library execinfo found: NO 00:01:59.660 Has header "execinfo.h" : YES 00:01:59.660 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.660 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.660 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.660 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.660 Run-time dependency openssl found: YES 3.0.9 00:01:59.660 Run-time dependency libpcap found: YES 1.10.4 00:01:59.660 Has header "pcap.h" with dependency libpcap: YES 00:01:59.660 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.660 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.660 Compiler for C supports arguments -Wformat: YES 00:01:59.660 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.660 Compiler for C supports arguments -Wformat-security: NO 00:01:59.660 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.660 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.660 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.660 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.660 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.660 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.660 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.660 Compiler for C supports arguments -Wundef: YES 00:01:59.660 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.660 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.660 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.660 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.660 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.660 Program objdump found: YES (/usr/bin/objdump) 00:01:59.660 Compiler for C supports arguments -mavx512f: YES 00:01:59.660 Checking if "AVX512 checking" compiles: YES 00:01:59.660 Fetching value of define "__SSE4_2__" : 1 00:01:59.660 Fetching value of define "__AES__" : 1 00:01:59.660 Fetching value of define "__AVX__" : 1 00:01:59.660 Fetching value of define "__AVX2__" : 1 00:01:59.660 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.660 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.660 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.660 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.660 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.660 Fetching value of define "__PCLMUL__" : 1 00:01:59.660 Fetching value of define "__RDRND__" : 1 00:01:59.660 Fetching value of define "__RDSEED__" : 1 00:01:59.660 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.660 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.660 Message: lib/log: Defining dependency "log" 00:01:59.660 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.660 Message: lib/argparse: Defining dependency "argparse" 00:01:59.660 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.660 Checking for function "getentropy" : NO 
00:01:59.660 Message: lib/eal: Defining dependency "eal" 00:01:59.660 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:59.660 Message: lib/ring: Defining dependency "ring" 00:01:59.660 Message: lib/rcu: Defining dependency "rcu" 00:01:59.660 Message: lib/mempool: Defining dependency "mempool" 00:01:59.660 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.660 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.660 Compiler for C supports arguments -mpclmul: YES 00:01:59.660 Compiler for C supports arguments -maes: YES 00:01:59.660 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.660 Compiler for C supports arguments -mavx512bw: YES 00:01:59.660 Compiler for C supports arguments -mavx512dq: YES 00:01:59.660 Compiler for C supports arguments -mavx512vl: YES 00:01:59.660 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.660 Compiler for C supports arguments -mavx2: YES 00:01:59.660 Compiler for C supports arguments -mavx: YES 00:01:59.660 Message: lib/net: Defining dependency "net" 00:01:59.660 Message: lib/meter: Defining dependency "meter" 00:01:59.660 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.660 Message: lib/pci: Defining dependency "pci" 00:01:59.660 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.660 Message: lib/metrics: Defining dependency "metrics" 00:01:59.660 Message: lib/hash: Defining dependency "hash" 00:01:59.660 Message: lib/timer: Defining dependency "timer" 00:01:59.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:59.660 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:59.660 Message: lib/acl: Defining dependency "acl" 00:01:59.660 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.660 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.660 Run-time dependency libelf found: YES 0.190 00:01:59.660 Message: lib/bpf: Defining dependency "bpf" 00:01:59.660 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.660 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.660 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.660 Message: lib/distributor: Defining dependency "distributor" 00:01:59.660 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.660 Message: lib/efd: Defining dependency "efd" 00:01:59.660 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.660 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:59.660 Message: lib/gpudev: Defining dependency "gpudev" 00:01:59.660 Message: lib/gro: Defining dependency "gro" 00:01:59.660 Message: lib/gso: Defining dependency "gso" 00:01:59.660 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.660 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.660 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.660 Message: lib/lpm: Defining dependency "lpm" 00:01:59.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.660 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:01:59.660 Message: lib/member: Defining dependency "member" 00:01:59.660 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.660 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.660 Message: lib/power: Defining dependency "power" 00:01:59.660 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.660 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.660 Message: lib/mldev: Defining dependency "mldev" 00:01:59.660 Message: lib/rib: Defining dependency "rib" 00:01:59.660 Message: lib/reorder: Defining dependency "reorder" 00:01:59.660 Message: lib/sched: Defining dependency "sched" 00:01:59.660 Message: lib/security: Defining dependency "security" 00:01:59.660 Message: lib/stack: Defining dependency "stack" 00:01:59.660 Has header "linux/userfaultfd.h" : YES 00:01:59.660 Has header "linux/vduse.h" : YES 00:01:59.660 Message: lib/vhost: Defining dependency "vhost" 00:01:59.660 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.660 Message: lib/pdcp: Defining dependency "pdcp" 00:01:59.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.660 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.660 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:59.660 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:59.660 Message: lib/fib: Defining dependency "fib" 00:01:59.660 Message: lib/port: Defining dependency "port" 00:01:59.660 Message: lib/pdump: Defining dependency "pdump" 00:01:59.660 Message: lib/table: Defining dependency "table" 00:01:59.660 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.660 Message: lib/graph: Defining dependency "graph" 00:01:59.660 Message: lib/node: Defining dependency "node" 00:01:59.660 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:01.558 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:01.558 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:01.558 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:01.558 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:01.558 Compiler for C supports arguments -Wno-unused-value: YES 00:02:01.558 Compiler for C supports arguments -Wno-format: YES 00:02:01.558 Compiler for C supports arguments -Wno-format-security: YES 00:02:01.558 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:01.558 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:01.558 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:01.558 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:01.558 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.558 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.558 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:01.558 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:01.558 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:01.558 Has header "sys/epoll.h" : YES 00:02:01.558 Program doxygen found: YES (/usr/bin/doxygen) 00:02:01.558 Configuring doxy-api-html.conf using configuration 00:02:01.558 Configuring doxy-api-man.conf using configuration 00:02:01.558 Program mandb found: YES (/usr/bin/mandb) 00:02:01.558 Program sphinx-build found: NO 00:02:01.558 Configuring rte_build_config.h using configuration 00:02:01.558 Message: 00:02:01.558 ================= 00:02:01.558 Applications Enabled 00:02:01.558 ================= 00:02:01.558 00:02:01.558 apps: 
00:02:01.558 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:01.558 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:01.558 test-pmd, test-regex, test-sad, test-security-perf, 00:02:01.558 00:02:01.558 Message: 00:02:01.558 ================= 00:02:01.558 Libraries Enabled 00:02:01.558 ================= 00:02:01.558 00:02:01.558 libs: 00:02:01.558 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:01.558 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:01.559 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:01.559 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:01.559 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:01.559 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:01.559 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:01.559 graph, node, 00:02:01.559 00:02:01.559 Message: 00:02:01.559 =============== 00:02:01.559 Drivers Enabled 00:02:01.559 =============== 00:02:01.559 00:02:01.559 common: 00:02:01.559 00:02:01.559 bus: 00:02:01.559 pci, vdev, 00:02:01.559 mempool: 00:02:01.559 ring, 00:02:01.559 dma: 00:02:01.559 00:02:01.559 net: 00:02:01.559 i40e, 00:02:01.559 raw: 00:02:01.559 00:02:01.559 crypto: 00:02:01.559 00:02:01.559 compress: 00:02:01.559 00:02:01.559 regex: 00:02:01.559 00:02:01.559 ml: 00:02:01.559 00:02:01.559 vdpa: 00:02:01.559 00:02:01.559 event: 00:02:01.559 00:02:01.559 baseband: 00:02:01.559 00:02:01.559 gpu: 00:02:01.559 00:02:01.559 00:02:01.559 Message: 00:02:01.559 ================= 00:02:01.559 Content Skipped 00:02:01.559 ================= 00:02:01.559 00:02:01.559 apps: 00:02:01.559 00:02:01.559 libs: 00:02:01.559 00:02:01.559 drivers: 00:02:01.559 common/cpt: not in enabled drivers build config 00:02:01.559 common/dpaax: not in enabled drivers build config 00:02:01.559 common/iavf: not in enabled drivers build config 00:02:01.559 common/idpf: not in enabled drivers build config 00:02:01.559 common/ionic: not in enabled drivers build config 00:02:01.559 common/mvep: not in enabled drivers build config 00:02:01.559 common/octeontx: not in enabled drivers build config 00:02:01.559 bus/auxiliary: not in enabled drivers build config 00:02:01.559 bus/cdx: not in enabled drivers build config 00:02:01.559 bus/dpaa: not in enabled drivers build config 00:02:01.559 bus/fslmc: not in enabled drivers build config 00:02:01.559 bus/ifpga: not in enabled drivers build config 00:02:01.559 bus/platform: not in enabled drivers build config 00:02:01.559 bus/uacce: not in enabled drivers build config 00:02:01.559 bus/vmbus: not in enabled drivers build config 00:02:01.559 common/cnxk: not in enabled drivers build config 00:02:01.559 common/mlx5: not in enabled drivers build config 00:02:01.559 common/nfp: not in enabled drivers build config 00:02:01.559 common/nitrox: not in enabled drivers build config 00:02:01.559 common/qat: not in enabled drivers build config 00:02:01.559 common/sfc_efx: not in enabled drivers build config 00:02:01.559 mempool/bucket: not in enabled drivers build config 00:02:01.559 mempool/cnxk: not in enabled drivers build config 00:02:01.559 mempool/dpaa: not in enabled drivers build config 00:02:01.559 mempool/dpaa2: not in enabled drivers build config 00:02:01.559 mempool/octeontx: not in enabled drivers build config 00:02:01.559 mempool/stack: not in enabled drivers build config 00:02:01.559 
dma/cnxk: not in enabled drivers build config 00:02:01.559 dma/dpaa: not in enabled drivers build config 00:02:01.559 dma/dpaa2: not in enabled drivers build config 00:02:01.559 dma/hisilicon: not in enabled drivers build config 00:02:01.559 dma/idxd: not in enabled drivers build config 00:02:01.559 dma/ioat: not in enabled drivers build config 00:02:01.559 dma/odm: not in enabled drivers build config 00:02:01.559 dma/skeleton: not in enabled drivers build config 00:02:01.559 net/af_packet: not in enabled drivers build config 00:02:01.559 net/af_xdp: not in enabled drivers build config 00:02:01.559 net/ark: not in enabled drivers build config 00:02:01.559 net/atlantic: not in enabled drivers build config 00:02:01.559 net/avp: not in enabled drivers build config 00:02:01.559 net/axgbe: not in enabled drivers build config 00:02:01.559 net/bnx2x: not in enabled drivers build config 00:02:01.559 net/bnxt: not in enabled drivers build config 00:02:01.559 net/bonding: not in enabled drivers build config 00:02:01.559 net/cnxk: not in enabled drivers build config 00:02:01.559 net/cpfl: not in enabled drivers build config 00:02:01.559 net/cxgbe: not in enabled drivers build config 00:02:01.559 net/dpaa: not in enabled drivers build config 00:02:01.559 net/dpaa2: not in enabled drivers build config 00:02:01.559 net/e1000: not in enabled drivers build config 00:02:01.559 net/ena: not in enabled drivers build config 00:02:01.559 net/enetc: not in enabled drivers build config 00:02:01.559 net/enetfec: not in enabled drivers build config 00:02:01.559 net/enic: not in enabled drivers build config 00:02:01.559 net/failsafe: not in enabled drivers build config 00:02:01.559 net/fm10k: not in enabled drivers build config 00:02:01.559 net/gve: not in enabled drivers build config 00:02:01.559 net/hinic: not in enabled drivers build config 00:02:01.559 net/hns3: not in enabled drivers build config 00:02:01.559 net/iavf: not in enabled drivers build config 00:02:01.559 net/ice: not in enabled drivers build config 00:02:01.559 net/idpf: not in enabled drivers build config 00:02:01.559 net/igc: not in enabled drivers build config 00:02:01.559 net/ionic: not in enabled drivers build config 00:02:01.559 net/ipn3ke: not in enabled drivers build config 00:02:01.559 net/ixgbe: not in enabled drivers build config 00:02:01.559 net/mana: not in enabled drivers build config 00:02:01.559 net/memif: not in enabled drivers build config 00:02:01.559 net/mlx4: not in enabled drivers build config 00:02:01.559 net/mlx5: not in enabled drivers build config 00:02:01.559 net/mvneta: not in enabled drivers build config 00:02:01.559 net/mvpp2: not in enabled drivers build config 00:02:01.559 net/netvsc: not in enabled drivers build config 00:02:01.559 net/nfb: not in enabled drivers build config 00:02:01.559 net/nfp: not in enabled drivers build config 00:02:01.559 net/ngbe: not in enabled drivers build config 00:02:01.559 net/null: not in enabled drivers build config 00:02:01.559 net/octeontx: not in enabled drivers build config 00:02:01.559 net/octeon_ep: not in enabled drivers build config 00:02:01.559 net/pcap: not in enabled drivers build config 00:02:01.559 net/pfe: not in enabled drivers build config 00:02:01.559 net/qede: not in enabled drivers build config 00:02:01.559 net/ring: not in enabled drivers build config 00:02:01.559 net/sfc: not in enabled drivers build config 00:02:01.559 net/softnic: not in enabled drivers build config 00:02:01.559 net/tap: not in enabled drivers build config 00:02:01.559 net/thunderx: not in 
enabled drivers build config 00:02:01.559 net/txgbe: not in enabled drivers build config 00:02:01.559 net/vdev_netvsc: not in enabled drivers build config 00:02:01.559 net/vhost: not in enabled drivers build config 00:02:01.559 net/virtio: not in enabled drivers build config 00:02:01.559 net/vmxnet3: not in enabled drivers build config 00:02:01.559 raw/cnxk_bphy: not in enabled drivers build config 00:02:01.559 raw/cnxk_gpio: not in enabled drivers build config 00:02:01.559 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:01.559 raw/ifpga: not in enabled drivers build config 00:02:01.559 raw/ntb: not in enabled drivers build config 00:02:01.559 raw/skeleton: not in enabled drivers build config 00:02:01.559 crypto/armv8: not in enabled drivers build config 00:02:01.559 crypto/bcmfs: not in enabled drivers build config 00:02:01.559 crypto/caam_jr: not in enabled drivers build config 00:02:01.559 crypto/ccp: not in enabled drivers build config 00:02:01.559 crypto/cnxk: not in enabled drivers build config 00:02:01.559 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.559 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.559 crypto/ionic: not in enabled drivers build config 00:02:01.559 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.559 crypto/mlx5: not in enabled drivers build config 00:02:01.559 crypto/mvsam: not in enabled drivers build config 00:02:01.559 crypto/nitrox: not in enabled drivers build config 00:02:01.559 crypto/null: not in enabled drivers build config 00:02:01.559 crypto/octeontx: not in enabled drivers build config 00:02:01.559 crypto/openssl: not in enabled drivers build config 00:02:01.559 crypto/scheduler: not in enabled drivers build config 00:02:01.559 crypto/uadk: not in enabled drivers build config 00:02:01.559 crypto/virtio: not in enabled drivers build config 00:02:01.559 compress/isal: not in enabled drivers build config 00:02:01.559 compress/mlx5: not in enabled drivers build config 00:02:01.559 compress/nitrox: not in enabled drivers build config 00:02:01.559 compress/octeontx: not in enabled drivers build config 00:02:01.559 compress/uadk: not in enabled drivers build config 00:02:01.559 compress/zlib: not in enabled drivers build config 00:02:01.559 regex/mlx5: not in enabled drivers build config 00:02:01.559 regex/cn9k: not in enabled drivers build config 00:02:01.559 ml/cnxk: not in enabled drivers build config 00:02:01.559 vdpa/ifc: not in enabled drivers build config 00:02:01.559 vdpa/mlx5: not in enabled drivers build config 00:02:01.559 vdpa/nfp: not in enabled drivers build config 00:02:01.559 vdpa/sfc: not in enabled drivers build config 00:02:01.559 event/cnxk: not in enabled drivers build config 00:02:01.559 event/dlb2: not in enabled drivers build config 00:02:01.559 event/dpaa: not in enabled drivers build config 00:02:01.559 event/dpaa2: not in enabled drivers build config 00:02:01.559 event/dsw: not in enabled drivers build config 00:02:01.559 event/opdl: not in enabled drivers build config 00:02:01.559 event/skeleton: not in enabled drivers build config 00:02:01.559 event/sw: not in enabled drivers build config 00:02:01.559 event/octeontx: not in enabled drivers build config 00:02:01.559 baseband/acc: not in enabled drivers build config 00:02:01.559 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:01.559 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:01.559 baseband/la12xx: not in enabled drivers build config 00:02:01.559 baseband/null: not in enabled drivers 
build config 00:02:01.559 baseband/turbo_sw: not in enabled drivers build config 00:02:01.559 gpu/cuda: not in enabled drivers build config 00:02:01.559 00:02:01.559 00:02:01.559 Build targets in project: 224 00:02:01.559 00:02:01.559 DPDK 24.07.0-rc2 00:02:01.559 00:02:01.559 User defined options 00:02:01.559 libdir : lib 00:02:01.559 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:01.559 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:01.559 c_link_args : 00:02:01.559 enable_docs : false 00:02:01.559 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.559 enable_kmods : false 00:02:01.559 machine : native 00:02:01.559 tests : false 00:02:01.559 00:02:01.559 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.559 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:01.560 17:08:12 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:01.560 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:01.560 [1/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.817 [2/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.817 [3/723] Linking static target lib/librte_kvargs.a 00:02:01.817 [4/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.817 [5/723] Linking static target lib/librte_log.a 00:02:01.817 [6/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.817 [7/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:01.817 [8/723] Linking static target lib/librte_argparse.a 00:02:02.074 [9/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.074 [10/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.074 [11/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.074 [12/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.074 [13/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.074 [14/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:02.074 [15/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.332 [16/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.332 [17/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:02.332 [18/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.332 [19/723] Linking target lib/librte_log.so.24.2 00:02:02.594 [20/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.852 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.852 [22/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:02.852 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.852 [24/723] Linking target lib/librte_kvargs.so.24.2 00:02:02.852 [25/723] Linking target lib/librte_argparse.so.24.2 00:02:02.852 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.852 [27/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:03.110 [28/723] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.110 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.110 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.110 [31/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.110 [32/723] Linking static target lib/librte_telemetry.a 00:02:03.110 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.110 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.367 [35/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.625 [36/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.625 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.625 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.625 [39/723] Linking target lib/librte_telemetry.so.24.2 00:02:03.625 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.625 [41/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.625 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.884 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.884 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.884 [45/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:03.884 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.884 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.884 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.143 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.143 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.402 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.402 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.402 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.402 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.402 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.660 [56/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.660 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.919 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.919 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.919 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.919 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.919 [62/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.177 [63/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.177 [64/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.177 [65/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.177 [66/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.177 [67/723] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.177 [68/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.435 [69/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.435 [70/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.693 [71/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.693 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.693 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.693 [74/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.951 [75/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.951 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.951 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.951 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.951 [79/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.951 [80/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.209 [81/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.209 [82/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.210 [83/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.210 [84/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.468 [85/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:06.468 [86/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.468 [87/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.726 [88/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.726 [89/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.726 [90/723] Linking static target lib/librte_ring.a 00:02:06.984 [91/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.984 [92/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.984 [93/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.984 [94/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.984 [95/723] Linking static target lib/librte_eal.a 00:02:07.243 [96/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.243 [97/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.243 [98/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.243 [99/723] Linking static target lib/librte_mempool.a 00:02:07.243 [100/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.500 [101/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.500 [102/723] Linking static target lib/librte_rcu.a 00:02:07.758 [103/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.758 [104/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.758 [105/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.016 [106/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.016 [107/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.016 [108/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.016 [109/723] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.016 [110/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.016 [111/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.016 [112/723] Linking static target lib/librte_mbuf.a 00:02:08.016 [113/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.274 [114/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.274 [115/723] Linking static target lib/librte_net.a 00:02:08.532 [116/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.532 [117/723] Linking static target lib/librte_meter.a 00:02:08.532 [118/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.532 [119/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.791 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.791 [121/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.791 [122/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.791 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.049 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.307 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.564 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.822 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.822 [128/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.822 [129/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:10.080 [130/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.080 [131/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.080 [132/723] Linking static target lib/librte_pci.a 00:02:10.080 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.080 [134/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.338 [135/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.338 [136/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:10.338 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.338 [138/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.338 [139/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.597 [140/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.597 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.597 [142/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.597 [143/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:10.597 [144/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.597 [145/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.597 [146/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.856 [147/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.856 [148/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.856 [149/723] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.856 [150/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.115 [151/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.115 [152/723] Linking static target lib/librte_cmdline.a 00:02:11.374 [153/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:11.374 [154/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:11.374 [155/723] Linking static target lib/librte_metrics.a 00:02:11.374 [156/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.374 [157/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.374 [158/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.632 [159/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.890 [160/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.891 [161/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.149 [162/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.149 [163/723] Linking static target lib/librte_timer.a 00:02:12.714 [164/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.714 [165/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:12.714 [166/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:12.973 [167/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:13.245 [168/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:13.245 [169/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.502 [170/723] Linking static target lib/librte_ethdev.a 00:02:13.502 [171/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:13.758 [172/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:13.758 [173/723] Linking static target lib/librte_bitratestats.a 00:02:13.758 [174/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.758 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:13.758 [176/723] Linking static target lib/librte_hash.a 00:02:13.758 [177/723] Linking static target lib/librte_bbdev.a 00:02:13.758 [178/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:14.015 [179/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.015 [180/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.274 [181/723] Linking target lib/librte_eal.so.24.2 00:02:14.274 [182/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:14.274 [183/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:14.274 [184/723] Linking static target lib/acl/libavx2_tmp.a 00:02:14.274 [185/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:14.274 [186/723] Linking target lib/librte_ring.so.24.2 00:02:14.274 [187/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.532 [188/723] Linking target lib/librte_meter.so.24.2 00:02:14.532 [189/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.532 [190/723] Linking target lib/librte_pci.so.24.2 00:02:14.532 [191/723] Generating symbol file 
lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:14.532 [192/723] Linking target lib/librte_rcu.so.24.2 00:02:14.533 [193/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:14.533 [194/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:14.533 [195/723] Linking target lib/librte_mempool.so.24.2 00:02:14.533 [196/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:14.533 [197/723] Linking target lib/librte_timer.so.24.2 00:02:14.792 [198/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:14.792 [199/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:14.792 [200/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:14.792 [201/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:14.792 [202/723] Linking static target lib/acl/libavx512_tmp.a 00:02:14.792 [203/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:14.792 [204/723] Linking target lib/librte_mbuf.so.24.2 00:02:14.792 [205/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:14.792 [206/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:14.792 [207/723] Linking static target lib/librte_acl.a 00:02:15.050 [208/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:15.050 [209/723] Linking target lib/librte_net.so.24.2 00:02:15.050 [210/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:15.050 [211/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:15.050 [212/723] Linking target lib/librte_bbdev.so.24.2 00:02:15.308 [213/723] Linking target lib/librte_cmdline.so.24.2 00:02:15.308 [214/723] Linking static target lib/librte_cfgfile.a 00:02:15.308 [215/723] Linking target lib/librte_hash.so.24.2 00:02:15.308 [216/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:15.309 [217/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.309 [218/723] Linking target lib/librte_acl.so.24.2 00:02:15.309 [219/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:15.566 [220/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:15.566 [221/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:15.566 [222/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.566 [223/723] Linking target lib/librte_cfgfile.so.24.2 00:02:15.566 [224/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:15.823 [225/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.823 [226/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.080 [227/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:16.080 [228/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:16.080 [229/723] Linking static target lib/librte_bpf.a 00:02:16.080 [230/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.080 [231/723] Linking static target lib/librte_compressdev.a 00:02:16.339 [232/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.339 [233/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.596 [234/723] 
Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:16.596 [235/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:16.596 [236/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.596 [237/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:16.596 [238/723] Linking static target lib/librte_distributor.a 00:02:16.854 [239/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.854 [240/723] Linking target lib/librte_compressdev.so.24.2 00:02:16.854 [241/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.854 [242/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.112 [243/723] Linking target lib/librte_distributor.so.24.2 00:02:17.112 [244/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.112 [245/723] Linking static target lib/librte_dmadev.a 00:02:17.112 [246/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:17.723 [247/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:17.723 [248/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.723 [249/723] Linking target lib/librte_dmadev.so.24.2 00:02:17.723 [250/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:17.723 [251/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:17.980 [252/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:17.980 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:17.980 [254/723] Linking static target lib/librte_efd.a 00:02:18.236 [255/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:18.236 [256/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.236 [257/723] Linking static target lib/librte_cryptodev.a 00:02:18.236 [258/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.236 [259/723] Linking target lib/librte_efd.so.24.2 00:02:18.800 [260/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:18.800 [261/723] Linking static target lib/librte_dispatcher.a 00:02:18.800 [262/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:18.800 [263/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.800 [264/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:18.800 [265/723] Linking static target lib/librte_gpudev.a 00:02:18.800 [266/723] Linking target lib/librte_ethdev.so.24.2 00:02:19.057 [267/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:19.057 [268/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:19.057 [269/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:19.057 [270/723] Linking target lib/librte_metrics.so.24.2 00:02:19.057 [271/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.057 [272/723] Linking target lib/librte_bpf.so.24.2 00:02:19.057 [273/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:19.313 [274/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:19.313 [275/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:19.313 [276/723] Linking target lib/librte_bitratestats.so.24.2 00:02:19.569 [277/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:19.569 [278/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.569 [279/723] Linking target lib/librte_cryptodev.so.24.2 00:02:19.569 [280/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:19.569 [281/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:19.826 [282/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.826 [283/723] Linking target lib/librte_gpudev.so.24.2 00:02:19.826 [284/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.084 [285/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.085 [286/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.085 [287/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.085 [288/723] Linking static target lib/librte_gro.a 00:02:20.085 [289/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.085 [290/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:20.085 [291/723] Linking static target lib/librte_eventdev.a 00:02:20.085 [292/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:20.342 [293/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.342 [294/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.342 [295/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.342 [296/723] Linking target lib/librte_gro.so.24.2 00:02:20.599 [297/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:20.599 [298/723] Linking static target lib/librte_gso.a 00:02:20.600 [299/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:20.600 [300/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.858 [301/723] Linking target lib/librte_gso.so.24.2 00:02:20.858 [302/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.858 [303/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.858 [304/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.858 [305/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:20.858 [306/723] Linking static target lib/librte_jobstats.a 00:02:21.115 [307/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:21.115 [308/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:21.115 [309/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:21.115 [310/723] Linking static target lib/librte_latencystats.a 00:02:21.115 [311/723] Linking static target lib/librte_ip_frag.a 00:02:21.373 [312/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.373 [313/723] Linking target lib/librte_jobstats.so.24.2 00:02:21.373 [314/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:21.630 [315/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.630 [316/723] Linking target lib/librte_latencystats.so.24.2 00:02:21.630 [317/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:21.630 [318/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:21.630 [319/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:21.630 [320/723] Linking target lib/librte_ip_frag.so.24.2 00:02:21.630 [321/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:21.630 [322/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.630 [323/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:21.888 [324/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.888 [325/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.145 [326/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:22.145 [327/723] Linking static target lib/librte_lpm.a 00:02:22.145 [328/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:22.402 [329/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.402 [330/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.402 [331/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.402 [332/723] Linking target lib/librte_eventdev.so.24.2 00:02:22.660 [333/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.660 [334/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.660 [335/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:22.660 [336/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:22.660 [337/723] Linking static target lib/librte_pcapng.a 00:02:22.660 [338/723] Linking target lib/librte_lpm.so.24.2 00:02:22.660 [339/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.660 [340/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:22.660 [341/723] Linking target lib/librte_dispatcher.so.24.2 00:02:22.660 [342/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:22.918 [343/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.918 [344/723] Linking target lib/librte_pcapng.so.24.2 00:02:22.918 [345/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:22.918 [346/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.176 [347/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:23.176 [348/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.435 [349/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:23.435 [350/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:23.435 [351/723] Linking static target lib/librte_member.a 00:02:23.435 [352/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:23.435 [353/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:23.435 [354/723] Linking static target lib/librte_power.a 00:02:23.435 [355/723] Linking static target lib/librte_regexdev.a 
00:02:23.435 [356/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:23.435 [357/723] Linking static target lib/librte_rawdev.a 00:02:23.694 [358/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:23.694 [359/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:23.694 [360/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.694 [361/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:23.694 [362/723] Linking target lib/librte_member.so.24.2 00:02:23.952 [363/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:23.952 [364/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:23.952 [365/723] Linking static target lib/librte_mldev.a 00:02:23.952 [366/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.952 [367/723] Linking target lib/librte_rawdev.so.24.2 00:02:24.209 [368/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:24.209 [369/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.209 [370/723] Linking target lib/librte_power.so.24.2 00:02:24.209 [371/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.209 [372/723] Linking target lib/librte_regexdev.so.24.2 00:02:24.467 [373/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:24.467 [374/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.467 [375/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:24.467 [376/723] Linking static target lib/librte_reorder.a 00:02:24.467 [377/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:24.467 [378/723] Linking static target lib/librte_rib.a 00:02:24.725 [379/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:24.725 [380/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.725 [381/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:24.725 [382/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:24.725 [383/723] Linking static target lib/librte_stack.a 00:02:24.725 [384/723] Linking target lib/librte_reorder.so.24.2 00:02:24.983 [385/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.983 [386/723] Linking static target lib/librte_security.a 00:02:24.983 [387/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:24.983 [388/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.983 [389/723] Linking target lib/librte_rib.so.24.2 00:02:24.983 [390/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.241 [391/723] Linking target lib/librte_stack.so.24.2 00:02:25.241 [392/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.241 [393/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:25.241 [394/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.499 [395/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.499 [396/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.499 [397/723] Linking target lib/librte_security.so.24.2 
00:02:25.499 [398/723] Linking target lib/librte_mldev.so.24.2 00:02:25.499 [399/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.499 [400/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:25.757 [401/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:25.757 [402/723] Linking static target lib/librte_sched.a 00:02:26.016 [403/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.016 [404/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.016 [405/723] Linking target lib/librte_sched.so.24.2 00:02:26.273 [406/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.273 [407/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:26.273 [408/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.837 [409/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.837 [410/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:26.837 [411/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:26.837 [412/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.095 [413/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:27.353 [414/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.353 [415/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:27.353 [416/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:27.353 [417/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:27.610 [418/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.610 [419/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:27.868 [420/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:27.868 [421/723] Linking static target lib/librte_ipsec.a 00:02:28.126 [422/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.126 [423/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:28.126 [424/723] Linking target lib/librte_ipsec.so.24.2 00:02:28.126 [425/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:28.126 [426/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:28.126 [427/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:28.126 [428/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:28.126 [429/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:28.126 [430/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:28.385 [431/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:28.385 [432/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:28.972 [433/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:28.972 [434/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:28.972 [435/723] Linking static target lib/librte_pdcp.a 00:02:29.230 [436/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:29.230 [437/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:29.230 [438/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:29.230 [439/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:29.230 [440/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:29.230 [441/723] 
Linking static target lib/librte_fib.a 00:02:29.488 [442/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.488 [443/723] Linking target lib/librte_pdcp.so.24.2 00:02:29.746 [444/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:29.746 [445/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.746 [446/723] Linking target lib/librte_fib.so.24.2 00:02:30.312 [447/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:30.312 [448/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:30.312 [449/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:30.312 [450/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:30.312 [451/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:30.570 [452/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:30.570 [453/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:30.828 [454/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:30.828 [455/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:30.828 [456/723] Linking static target lib/librte_port.a 00:02:31.086 [457/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:31.086 [458/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:31.086 [459/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:31.344 [460/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:31.344 [461/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:31.344 [462/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.602 [463/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:31.602 [464/723] Linking target lib/librte_port.so.24.2 00:02:31.602 [465/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:31.602 [466/723] Linking static target lib/librte_pdump.a 00:02:31.602 [467/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:31.860 [468/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.860 [469/723] Linking target lib/librte_pdump.so.24.2 00:02:31.860 [470/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:32.118 [471/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.118 [472/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:32.375 [473/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:32.375 [474/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:32.375 [475/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:32.375 [476/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:32.647 [477/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:32.647 [478/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:32.647 [479/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:32.914 [480/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:32.914 [481/723] Linking static target lib/librte_table.a 00:02:32.914 
[482/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:33.172 [483/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:33.430 [484/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.430 [485/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:33.688 [486/723] Linking target lib/librte_table.so.24.2 00:02:33.688 [487/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:33.946 [488/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:33.946 [489/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:33.946 [490/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:34.204 [491/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:34.462 [492/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:34.720 [493/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:34.720 [494/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:34.720 [495/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:34.720 [496/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:35.046 [497/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.303 [498/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:35.304 [499/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:35.304 [500/723] Linking static target lib/librte_graph.a 00:02:35.304 [501/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:35.561 [502/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:35.819 [503/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:35.819 [504/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:36.077 [505/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:36.077 [506/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.077 [507/723] Linking target lib/librte_graph.so.24.2 00:02:36.334 [508/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:36.334 [509/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:36.592 [510/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:36.592 [511/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:36.592 [512/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:37.156 [513/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:37.156 [514/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:37.156 [515/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:37.156 [516/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:37.156 [517/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.414 [518/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.414 [519/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:37.671 [520/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:37.671 [521/723] Linking static target lib/librte_node.a 00:02:37.671 [522/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.671 [523/723] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.929 [524/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.929 [525/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.929 [526/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.929 [527/723] Linking target lib/librte_node.so.24.2 00:02:38.186 [528/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.186 [529/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.186 [530/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.186 [531/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.444 [532/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.444 [533/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.444 [534/723] Linking static target drivers/librte_bus_vdev.a 00:02:38.444 [535/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.444 [536/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.444 [537/723] Linking static target drivers/librte_bus_pci.a 00:02:38.701 [538/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:38.702 [539/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.702 [540/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.702 [541/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.702 [542/723] Linking target drivers/librte_bus_vdev.so.24.2 00:02:38.959 [543/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:38.959 [544/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:38.959 [545/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.959 [546/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.959 [547/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.959 [548/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:39.217 [549/723] Linking target drivers/librte_bus_pci.so.24.2 00:02:39.217 [550/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.217 [551/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.217 [552/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:39.217 [553/723] Linking static target drivers/librte_mempool_ring.a 00:02:39.217 [554/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.217 [555/723] Linking target drivers/librte_mempool_ring.so.24.2 00:02:39.475 [556/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:40.039 [557/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:40.297 [558/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:40.297 [559/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:40.297 [560/723] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:02:40.555 [561/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:41.488 [562/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:41.488 [563/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:41.746 [564/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:41.746 [565/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:41.746 [566/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:41.746 [567/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:42.004 [568/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:42.587 [569/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:43.169 [570/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:43.169 [571/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:43.169 [572/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:43.427 [573/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:43.685 [574/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:43.943 [575/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:43.943 [576/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:43.943 [577/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:43.943 [578/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:44.509 [579/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:44.509 [580/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:44.767 [581/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:44.767 [582/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:44.767 [583/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:45.025 [584/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:45.025 [585/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:45.591 [586/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:45.591 [587/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:45.591 [588/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:45.591 [589/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:45.591 [590/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:45.591 [591/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:45.849 [592/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:45.849 [593/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:46.106 [594/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:46.107 [595/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:46.107 [596/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:46.364 [597/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:46.364 [598/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.364 [599/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:46.364 [600/723] Compiling C object 
drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.364 [601/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:46.364 [602/723] Linking static target drivers/librte_net_i40e.a 00:02:46.930 [603/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:46.930 [604/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:47.189 [605/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.189 [606/723] Linking static target lib/librte_vhost.a 00:02:47.189 [607/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.189 [608/723] Linking target drivers/librte_net_i40e.so.24.2 00:02:47.447 [609/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:47.705 [610/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:47.705 [611/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:47.962 [612/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:48.220 [613/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:48.220 [614/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:48.220 [615/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:48.477 [616/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.477 [617/723] Linking target lib/librte_vhost.so.24.2 00:02:48.477 [618/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:48.734 [619/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:49.032 [620/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:49.032 [621/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:49.032 [622/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:49.290 [623/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:49.290 [624/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:49.290 [625/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:49.547 [626/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:49.548 [627/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:49.548 [628/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:49.805 [629/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:49.805 [630/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:49.805 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:50.369 [632/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:50.626 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:50.626 [634/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:50.884 [635/723] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:51.143 [636/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:51.416 [637/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:51.683 [638/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:51.941 [639/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:51.941 [640/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:51.941 [641/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:51.941 [642/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:51.941 [643/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:51.941 [644/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:52.199 [645/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:52.199 [646/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:52.456 [647/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:52.456 [648/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:52.737 [649/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:52.737 [650/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:52.737 [651/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:52.737 [652/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:52.994 [653/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.994 [654/723] Linking static target lib/librte_pipeline.a 00:02:52.994 [655/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:53.252 [656/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:53.252 [657/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:53.252 [658/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:53.510 [659/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:53.510 [660/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:53.510 [661/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:53.510 [662/723] Linking target app/dpdk-dumpcap 00:02:53.768 [663/723] Linking target app/dpdk-graph 00:02:53.768 [664/723] Linking target app/dpdk-pdump 00:02:53.768 [665/723] Linking target app/dpdk-proc-info 00:02:54.026 [666/723] Linking target app/dpdk-test-acl 00:02:54.026 [667/723] Linking target app/dpdk-test-bbdev 00:02:54.026 [668/723] Linking target app/dpdk-test-compress-perf 00:02:54.284 [669/723] Linking target app/dpdk-test-cmdline 00:02:54.284 [670/723] Linking target app/dpdk-test-crypto-perf 00:02:54.284 [671/723] Linking target app/dpdk-test-dma-perf 00:02:54.542 [672/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:54.542 [673/723] Linking target app/dpdk-test-fib 00:02:54.542 [674/723] Linking target app/dpdk-test-flow-perf 00:02:54.542 [675/723] Linking target app/dpdk-test-gpudev 00:02:54.800 [676/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:54.800 [677/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 
00:02:54.800 [678/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.057 [679/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.315 [680/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:55.315 [681/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.315 [682/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.315 [683/723] Linking target app/dpdk-test-eventdev 00:02:55.315 [684/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.315 [685/723] Linking target app/dpdk-test-mldev 00:02:55.879 [686/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:55.879 [687/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:56.137 [688/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.137 [689/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.397 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.397 [691/723] Linking target app/dpdk-test-pipeline 00:02:56.655 [692/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.655 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:56.655 [694/723] Linking target lib/librte_pipeline.so.24.2 00:02:56.912 [695/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.169 [696/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.426 [697/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.426 [698/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.683 [699/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.940 [700/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:58.198 [701/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:58.198 [702/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:58.198 [703/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:58.455 [704/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.455 [705/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.019 [706/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:59.019 [707/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:59.275 [708/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:59.275 [709/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:59.839 [710/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:59.839 [711/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:59.839 [712/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:59.839 [713/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:59.839 [714/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:00.097 [715/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:00.097 [716/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.097 [717/723] Linking target app/dpdk-test-regex 00:03:00.097 [718/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:00.663 [719/723] Linking target app/dpdk-test-sad 
00:03:00.663 [720/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 
00:03:00.663 [721/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 
00:03:01.229 [722/723] Linking target app/dpdk-test-security-perf 
00:03:01.487 [723/723] Linking target app/dpdk-testpmd 
00:03:01.487 17:09:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 
00:03:01.487 17:09:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 
00:03:01.487 17:09:12 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 
00:03:01.487 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 
00:03:01.487 [0/1] Installing files. 
00:03:01.743 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 
00:03:01.743 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:03:01.743 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.743 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:02.003 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:02.004 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.004 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:02.005 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:02.005 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:02.005 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing 
lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing 
lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.005 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_regexdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.006 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:02.576 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:02.576 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing drivers/librte_mempool_ring.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:02.576 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.576 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:03:02.576 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.576 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.577 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.578 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing 
/home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.579 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:02.580 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:02.580 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:02.580 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:02.580 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:02.580 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:02.580 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:03:02.580 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:02.580 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:02.580 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:02.580 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:02.580 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:02.580 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:02.580 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:02.580 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:02.580 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:02.580 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:02.580 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:02.580 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:02.580 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:02.580 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:02.580 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:02.580 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:02.580 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:02.580 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:02.580 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
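The libdpdk.pc and libdpdk-libs.pc files installed above expose the build's compile and link flags through pkg-config, which is how the later configure step picks the libraries up. A minimal sketch of querying them by hand, assuming the install prefix shown in the log (these commands are not part of the captured run):

    # point pkg-config at the just-installed DPDK metadata
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk          # prints the installed DPDK version
    pkg-config --cflags --libs libdpdk      # compile/link flags for consumers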
00:03:02.580 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:02.580 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:02.580 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:02.580 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:02.580 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:02.580 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:02.580 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:02.580 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:02.580 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:02.580 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:02.580 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:02.580 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:02.580 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:02.580 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:02.580 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:02.580 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:02.580 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:02.580 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:02.580 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:02.580 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:02.580 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:02.580 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:02.580 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:02.580 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:02.580 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:02.580 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:02.580 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:02.580 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 
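The usertools scripts installed into /home/vagrant/spdk_repo/dpdk/build/bin a few entries earlier (dpdk-devbind.py, dpdk-hugepages.py, dpdk-telemetry.py, and friends) are the usual way to inspect device bindings and hugepage reservations on the test VM. A hedged sketch of typical read-only invocations, not taken from this run:

    /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-devbind.py --status    # list devices and their bound drivers
    /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-hugepages.py --show    # show current hugepage reservations/mounts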
00:03:02.580 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:02.580 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:02.580 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:02.580 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:02.580 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:02.580 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:02.580 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:02.580 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:02.580 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:02.580 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:02.580 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:02.580 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:02.580 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:02.580 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:02.580 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:02.580 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:02.580 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:02.580 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:02.580 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:02.580 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:02.580 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:02.580 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:02.580 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:02.580 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:02.580 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:02.580 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:02.580 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:02.580 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:02.580 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:02.580 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:02.580 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 
00:03:02.580 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:02.580 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:02.580 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:02.580 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:02.580 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:02.581 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:02.581 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:02.581 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:02.581 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:02.581 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:02.581 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:02.581 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:02.581 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:02.581 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:02.581 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:02.581 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:02.581 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:02.581 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:02.581 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:02.581 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:02.581 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:02.581 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:02.581 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:02.581 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:02.581 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:02.581 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:02.581 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:02.581 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:02.581 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:02.581 Installing symlink pointing to librte_fib.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:02.581 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:02.581 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:02.581 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:02.581 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:02.581 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:02.581 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:02.581 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:02.581 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:02.581 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:02.581 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:02.581 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:02.581 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:02.581 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:02.581 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:02.581 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:02.581 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:02.581 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:02.581 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:02.581 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:02.581 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:02.581 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:02.581 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:02.581 17:09:13 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:02.581 17:09:13 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:02.581 00:03:02.581 real 1m8.215s 00:03:02.581 user 8m19.827s 00:03:02.581 sys 1m21.142s 00:03:02.581 17:09:13 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:02.581 ************************************ 00:03:02.581 END TEST build_native_dpdk 00:03:02.581 ************************************ 00:03:02.581 17:09:13 build_native_dpdk -- 
common/autotest_common.sh@10 -- $ set +x 00:03:02.581 17:09:13 -- common/autotest_common.sh@1142 -- $ return 0 00:03:02.581 17:09:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:02.581 17:09:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:02.581 17:09:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:02.581 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:02.838 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:02.838 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:02.838 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:03.402 Using 'verbs' RDMA provider 00:03:17.148 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:29.369 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:29.369 Creating mk/config.mk...done. 00:03:29.369 Creating mk/cc.flags.mk...done. 00:03:29.369 Type 'make' to build. 00:03:29.370 17:09:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:29.370 17:09:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:29.370 17:09:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:29.370 17:09:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.370 ************************************ 00:03:29.370 START TEST make 00:03:29.370 ************************************ 00:03:29.370 17:09:39 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:29.627 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:29.627 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:29.627 meson setup builddir \ 00:03:29.627 -Dwith-libaio=enabled \ 00:03:29.627 -Dwith-liburing=enabled \ 00:03:29.627 -Dwith-libvfn=disabled \ 00:03:29.627 -Dwith-spdk=false && \ 00:03:29.627 meson compile -C builddir && \ 00:03:29.627 cd -) 00:03:29.627 make[1]: Nothing to be done for 'all'. 
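The configure invocation above is what ties the freshly installed DPDK into the SPDK build; the essential pieces are the --with-dpdk prefix and the shared-library switch, after which the build is driven by make. A condensed sketch of reproducing just that wiring, using only flags that appear in the log (paths as on this VM; not a verbatim replay of the full configure line):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-xnvme --with-shared
    make -j10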
00:03:32.915 The Meson build system 00:03:32.915 Version: 1.3.1 00:03:32.915 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:32.915 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:32.915 Build type: native build 00:03:32.915 Project name: xnvme 00:03:32.915 Project version: 0.7.3 00:03:32.915 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:32.915 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:32.915 Host machine cpu family: x86_64 00:03:32.915 Host machine cpu: x86_64 00:03:32.915 Message: host_machine.system: linux 00:03:32.915 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:32.915 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:32.915 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:32.915 Run-time dependency threads found: YES 00:03:32.915 Has header "setupapi.h" : NO 00:03:32.915 Has header "linux/blkzoned.h" : YES 00:03:32.915 Has header "linux/blkzoned.h" : YES (cached) 00:03:32.915 Has header "libaio.h" : YES 00:03:32.915 Library aio found: YES 00:03:32.915 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:32.915 Run-time dependency liburing found: YES 2.2 00:03:32.915 Dependency libvfn skipped: feature with-libvfn disabled 00:03:32.915 Run-time dependency appleframeworks found: NO (tried framework) 00:03:32.915 Run-time dependency appleframeworks found: NO (tried framework) 00:03:32.915 Configuring xnvme_config.h using configuration 00:03:32.915 Configuring xnvme.spec using configuration 00:03:32.915 Run-time dependency bash-completion found: YES 2.11 00:03:32.915 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:32.915 Program cp found: YES (/usr/bin/cp) 00:03:32.915 Has header "winsock2.h" : NO 00:03:32.915 Has header "dbghelp.h" : NO 00:03:32.915 Library rpcrt4 found: NO 00:03:32.915 Library rt found: YES 00:03:32.915 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:32.915 Found CMake: /usr/bin/cmake (3.27.7) 00:03:32.915 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:32.915 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:32.915 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:32.915 Build targets in project: 32 00:03:32.915 00:03:32.915 xnvme 0.7.3 00:03:32.915 00:03:32.915 User defined options 00:03:32.915 with-libaio : enabled 00:03:32.915 with-liburing: enabled 00:03:32.915 with-libvfn : disabled 00:03:32.915 with-spdk : false 00:03:32.915 00:03:32.915 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:33.480 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:33.480 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:33.737 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:33.737 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:33.737 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:33.737 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:33.737 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:33.737 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:33.737 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:33.737 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:33.737 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:33.737 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:33.737 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:33.737 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:33.737 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:33.737 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:33.995 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:33.995 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:33.995 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:33.995 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:33.995 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:33.995 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:33.995 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:33.995 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:33.995 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:33.995 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:33.995 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:33.995 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:33.995 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:33.995 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:33.995 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:34.254 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:34.254 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:34.254 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:34.254 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:34.254 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:34.254 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:34.254 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:34.254 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:34.254 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:34.254 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:34.254 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:34.254 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:34.254 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:34.254 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:34.254 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:34.254 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:34.254 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:34.254 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:34.254 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:34.254 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:34.254 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:34.254 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:34.254 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_file.c.o 00:03:34.254 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:34.512 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:34.512 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:34.512 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:34.512 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:34.512 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:34.512 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:34.512 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:34.512 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:34.512 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:34.512 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:34.512 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:34.512 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:34.512 [67/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:34.770 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:34.770 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:34.770 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:34.770 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:34.770 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:34.770 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:34.770 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:34.770 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:34.770 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:34.770 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:34.770 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:34.770 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:34.770 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:34.770 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:35.027 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:35.027 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:35.027 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:35.027 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:35.027 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:35.027 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:35.027 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:35.027 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:35.285 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:35.285 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:35.285 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:35.285 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:35.285 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:35.285 [95/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:35.285 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:35.285 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:35.285 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:35.285 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:35.285 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:35.285 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:35.285 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:35.285 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:35.285 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:35.285 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:35.285 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:35.285 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:35.285 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:35.285 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:35.285 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:35.541 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:35.541 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:35.541 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:35.541 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:35.541 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:35.541 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.541 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:35.541 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:35.541 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:35.541 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:35.541 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:35.541 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:35.541 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:35.541 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:35.541 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:35.541 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:35.541 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:35.802 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:35.802 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:35.802 [130/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:35.802 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:35.802 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:35.802 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:35.802 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:35.802 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:35.802 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:35.802 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:35.802 [138/203] Linking target lib/libxnvme.so 00:03:35.802 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:35.802 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:35.802 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:35.802 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 
00:03:36.059 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:36.059 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:36.059 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:36.059 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:36.059 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:36.059 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:36.059 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:36.059 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:36.323 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:36.323 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:36.323 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:36.323 [154/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:36.323 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:36.323 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:36.323 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:36.323 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:36.323 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:36.323 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:36.323 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:36.581 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:36.581 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:36.581 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:36.581 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:36.581 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:36.581 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:36.581 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:36.581 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:36.581 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:36.581 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:36.581 [172/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:36.838 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:36.838 [174/203] Linking static target lib/libxnvme.a 00:03:36.838 [175/203] Linking target tests/xnvme_tests_buf 00:03:36.838 [176/203] Linking target tests/xnvme_tests_scc 00:03:36.838 [177/203] Linking target tests/xnvme_tests_enum 00:03:36.838 [178/203] Linking target tests/xnvme_tests_cli 00:03:36.838 [179/203] Linking target tests/xnvme_tests_async_intf 00:03:36.838 [180/203] Linking target tests/xnvme_tests_znd_append 00:03:36.838 [181/203] Linking target tests/xnvme_tests_ioworker 00:03:36.838 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:36.839 [183/203] Linking target tests/xnvme_tests_lblk 00:03:36.839 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:03:36.839 [185/203] Linking target tests/xnvme_tests_znd_state 00:03:36.839 [186/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:36.839 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:36.839 [188/203] Linking target tests/xnvme_tests_kvs 00:03:36.839 [189/203] Linking target tests/xnvme_tests_map 00:03:36.839 [190/203] Linking target tools/xnvme_file 00:03:36.839 [191/203] Linking target 
tools/lblk 00:03:37.095 [192/203] Linking target examples/xnvme_hello 00:03:37.095 [193/203] Linking target tools/xdd 00:03:37.095 [194/203] Linking target examples/xnvme_enum 00:03:37.095 [195/203] Linking target tools/xnvme 00:03:37.095 [196/203] Linking target examples/xnvme_dev 00:03:37.095 [197/203] Linking target tools/zoned 00:03:37.095 [198/203] Linking target tools/kvs 00:03:37.095 [199/203] Linking target examples/xnvme_single_async 00:03:37.095 [200/203] Linking target examples/zoned_io_sync 00:03:37.095 [201/203] Linking target examples/xnvme_io_async 00:03:37.095 [202/203] Linking target examples/xnvme_single_sync 00:03:37.095 [203/203] Linking target examples/zoned_io_async 00:03:37.095 INFO: autodetecting backend as ninja 00:03:37.095 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.095 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:59.005 CC lib/ut/ut.o 00:03:59.005 CC lib/ut_mock/mock.o 00:03:59.005 CC lib/log/log_deprecated.o 00:03:59.005 CC lib/log/log.o 00:03:59.005 CC lib/log/log_flags.o 00:03:59.005 LIB libspdk_log.a 00:03:59.005 LIB libspdk_ut_mock.a 00:03:59.005 LIB libspdk_ut.a 00:03:59.005 SO libspdk_ut_mock.so.6.0 00:03:59.005 SO libspdk_ut.so.2.0 00:03:59.005 SO libspdk_log.so.7.0 00:03:59.005 SYMLINK libspdk_ut_mock.so 00:03:59.005 SYMLINK libspdk_ut.so 00:03:59.005 SYMLINK libspdk_log.so 00:03:59.005 CC lib/ioat/ioat.o 00:03:59.005 CC lib/util/base64.o 00:03:59.005 CC lib/util/bit_array.o 00:03:59.005 CC lib/util/cpuset.o 00:03:59.005 CC lib/util/crc16.o 00:03:59.005 CC lib/dma/dma.o 00:03:59.005 CC lib/util/crc32.o 00:03:59.005 CC lib/util/crc32c.o 00:03:59.005 CXX lib/trace_parser/trace.o 00:03:59.005 CC lib/util/crc32_ieee.o 00:03:59.005 CC lib/vfio_user/host/vfio_user_pci.o 00:03:59.005 CC lib/vfio_user/host/vfio_user.o 00:03:59.005 CC lib/util/crc64.o 00:03:59.005 CC lib/util/dif.o 00:03:59.005 LIB libspdk_dma.a 00:03:59.005 CC lib/util/fd.o 00:03:59.005 CC lib/util/file.o 00:03:59.005 SO libspdk_dma.so.4.0 00:03:59.005 CC lib/util/hexlify.o 00:03:59.005 CC lib/util/iov.o 00:03:59.005 SYMLINK libspdk_dma.so 00:03:59.005 CC lib/util/math.o 00:03:59.005 LIB libspdk_ioat.a 00:03:59.005 SO libspdk_ioat.so.7.0 00:03:59.005 CC lib/util/pipe.o 00:03:59.005 CC lib/util/strerror_tls.o 00:03:59.005 CC lib/util/string.o 00:03:59.005 SYMLINK libspdk_ioat.so 00:03:59.005 CC lib/util/uuid.o 00:03:59.005 CC lib/util/fd_group.o 00:03:59.005 LIB libspdk_vfio_user.a 00:03:59.005 CC lib/util/xor.o 00:03:59.005 SO libspdk_vfio_user.so.5.0 00:03:59.005 CC lib/util/zipf.o 00:03:59.005 SYMLINK libspdk_vfio_user.so 00:03:59.005 LIB libspdk_util.a 00:03:59.005 SO libspdk_util.so.9.1 00:03:59.005 LIB libspdk_trace_parser.a 00:03:59.005 SYMLINK libspdk_util.so 00:03:59.005 SO libspdk_trace_parser.so.5.0 00:03:59.264 SYMLINK libspdk_trace_parser.so 00:03:59.264 CC lib/conf/conf.o 00:03:59.264 CC lib/rdma_utils/rdma_utils.o 00:03:59.264 CC lib/idxd/idxd.o 00:03:59.264 CC lib/vmd/vmd.o 00:03:59.264 CC lib/vmd/led.o 00:03:59.264 CC lib/env_dpdk/env.o 00:03:59.264 CC lib/idxd/idxd_user.o 00:03:59.264 CC lib/json/json_parse.o 00:03:59.264 CC lib/env_dpdk/memory.o 00:03:59.264 CC lib/rdma_provider/common.o 00:03:59.522 CC lib/env_dpdk/pci.o 00:03:59.522 LIB libspdk_conf.a 00:03:59.522 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:59.522 SO libspdk_conf.so.6.0 00:03:59.522 LIB libspdk_rdma_utils.a 00:03:59.522 CC lib/idxd/idxd_kernel.o 00:03:59.522 CC lib/json/json_util.o 00:03:59.522 SYMLINK libspdk_conf.so 
00:03:59.522 SO libspdk_rdma_utils.so.1.0 00:03:59.522 CC lib/json/json_write.o 00:03:59.781 SYMLINK libspdk_rdma_utils.so 00:03:59.781 CC lib/env_dpdk/init.o 00:03:59.781 CC lib/env_dpdk/threads.o 00:03:59.781 LIB libspdk_rdma_provider.a 00:03:59.781 SO libspdk_rdma_provider.so.6.0 00:03:59.781 CC lib/env_dpdk/pci_ioat.o 00:04:00.039 CC lib/env_dpdk/pci_virtio.o 00:04:00.039 SYMLINK libspdk_rdma_provider.so 00:04:00.039 CC lib/env_dpdk/pci_vmd.o 00:04:00.039 CC lib/env_dpdk/pci_idxd.o 00:04:00.039 LIB libspdk_json.a 00:04:00.039 SO libspdk_json.so.6.0 00:04:00.039 CC lib/env_dpdk/pci_event.o 00:04:00.039 CC lib/env_dpdk/sigbus_handler.o 00:04:00.039 SYMLINK libspdk_json.so 00:04:00.039 CC lib/env_dpdk/pci_dpdk.o 00:04:00.039 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.039 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.297 LIB libspdk_idxd.a 00:04:00.297 SO libspdk_idxd.so.12.0 00:04:00.297 LIB libspdk_vmd.a 00:04:00.297 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.297 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.297 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.297 SYMLINK libspdk_idxd.so 00:04:00.297 SO libspdk_vmd.so.6.0 00:04:00.297 SYMLINK libspdk_vmd.so 00:04:00.556 LIB libspdk_jsonrpc.a 00:04:00.556 SO libspdk_jsonrpc.so.6.0 00:04:00.814 SYMLINK libspdk_jsonrpc.so 00:04:01.072 CC lib/rpc/rpc.o 00:04:01.330 LIB libspdk_env_dpdk.a 00:04:01.330 LIB libspdk_rpc.a 00:04:01.330 SO libspdk_rpc.so.6.0 00:04:01.330 SO libspdk_env_dpdk.so.14.1 00:04:01.330 SYMLINK libspdk_rpc.so 00:04:01.588 SYMLINK libspdk_env_dpdk.so 00:04:01.588 CC lib/keyring/keyring.o 00:04:01.588 CC lib/keyring/keyring_rpc.o 00:04:01.588 CC lib/notify/notify.o 00:04:01.588 CC lib/notify/notify_rpc.o 00:04:01.588 CC lib/trace/trace.o 00:04:01.588 CC lib/trace/trace_flags.o 00:04:01.588 CC lib/trace/trace_rpc.o 00:04:01.846 LIB libspdk_notify.a 00:04:01.846 SO libspdk_notify.so.6.0 00:04:01.846 LIB libspdk_trace.a 00:04:01.846 LIB libspdk_keyring.a 00:04:02.105 SYMLINK libspdk_notify.so 00:04:02.105 SO libspdk_keyring.so.1.0 00:04:02.105 SO libspdk_trace.so.10.0 00:04:02.105 SYMLINK libspdk_keyring.so 00:04:02.105 SYMLINK libspdk_trace.so 00:04:02.363 CC lib/sock/sock.o 00:04:02.363 CC lib/thread/iobuf.o 00:04:02.363 CC lib/thread/thread.o 00:04:02.363 CC lib/sock/sock_rpc.o 00:04:02.928 LIB libspdk_sock.a 00:04:02.928 SO libspdk_sock.so.10.0 00:04:03.186 SYMLINK libspdk_sock.so 00:04:03.445 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:03.445 CC lib/nvme/nvme_fabric.o 00:04:03.445 CC lib/nvme/nvme_ctrlr.o 00:04:03.445 CC lib/nvme/nvme_ns_cmd.o 00:04:03.445 CC lib/nvme/nvme_ns.o 00:04:03.445 CC lib/nvme/nvme_pcie_common.o 00:04:03.445 CC lib/nvme/nvme_pcie.o 00:04:03.445 CC lib/nvme/nvme.o 00:04:03.445 CC lib/nvme/nvme_qpair.o 00:04:04.379 CC lib/nvme/nvme_quirks.o 00:04:04.379 CC lib/nvme/nvme_transport.o 00:04:04.379 CC lib/nvme/nvme_discovery.o 00:04:04.379 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:04.379 LIB libspdk_thread.a 00:04:04.379 SO libspdk_thread.so.10.1 00:04:04.379 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:04.638 SYMLINK libspdk_thread.so 00:04:04.638 CC lib/nvme/nvme_tcp.o 00:04:04.638 CC lib/nvme/nvme_opal.o 00:04:04.638 CC lib/nvme/nvme_io_msg.o 00:04:04.638 CC lib/nvme/nvme_poll_group.o 00:04:04.896 CC lib/nvme/nvme_zns.o 00:04:04.896 CC lib/nvme/nvme_stubs.o 00:04:05.154 CC lib/nvme/nvme_auth.o 00:04:05.154 CC lib/nvme/nvme_cuse.o 00:04:05.154 CC lib/accel/accel.o 00:04:05.412 CC lib/accel/accel_rpc.o 00:04:05.412 CC lib/blob/blobstore.o 00:04:05.412 CC lib/init/json_config.o 
00:04:05.412 CC lib/accel/accel_sw.o 00:04:05.412 CC lib/nvme/nvme_rdma.o 00:04:05.670 CC lib/init/subsystem.o 00:04:05.670 CC lib/init/subsystem_rpc.o 00:04:05.927 CC lib/blob/request.o 00:04:05.927 CC lib/blob/zeroes.o 00:04:05.927 CC lib/init/rpc.o 00:04:06.264 CC lib/blob/blob_bs_dev.o 00:04:06.264 LIB libspdk_init.a 00:04:06.264 SO libspdk_init.so.5.0 00:04:06.264 SYMLINK libspdk_init.so 00:04:06.523 CC lib/virtio/virtio.o 00:04:06.523 CC lib/virtio/virtio_vhost_user.o 00:04:06.523 CC lib/virtio/virtio_vfio_user.o 00:04:06.523 CC lib/virtio/virtio_pci.o 00:04:06.523 CC lib/event/app.o 00:04:06.523 CC lib/event/reactor.o 00:04:06.523 CC lib/event/log_rpc.o 00:04:06.780 LIB libspdk_accel.a 00:04:06.780 CC lib/event/app_rpc.o 00:04:06.780 CC lib/event/scheduler_static.o 00:04:06.780 SO libspdk_accel.so.15.1 00:04:06.780 LIB libspdk_virtio.a 00:04:06.780 SYMLINK libspdk_accel.so 00:04:07.037 SO libspdk_virtio.so.7.0 00:04:07.037 SYMLINK libspdk_virtio.so 00:04:07.037 LIB libspdk_event.a 00:04:07.037 SO libspdk_event.so.14.0 00:04:07.037 CC lib/bdev/bdev.o 00:04:07.037 CC lib/bdev/bdev_rpc.o 00:04:07.037 CC lib/bdev/bdev_zone.o 00:04:07.037 CC lib/bdev/part.o 00:04:07.037 CC lib/bdev/scsi_nvme.o 00:04:07.294 SYMLINK libspdk_event.so 00:04:07.294 LIB libspdk_nvme.a 00:04:07.551 SO libspdk_nvme.so.13.1 00:04:07.808 SYMLINK libspdk_nvme.so 00:04:09.706 LIB libspdk_blob.a 00:04:09.964 SO libspdk_blob.so.11.0 00:04:09.964 SYMLINK libspdk_blob.so 00:04:10.222 CC lib/blobfs/tree.o 00:04:10.222 CC lib/blobfs/blobfs.o 00:04:10.222 CC lib/lvol/lvol.o 00:04:10.788 LIB libspdk_bdev.a 00:04:10.788 SO libspdk_bdev.so.15.1 00:04:11.046 SYMLINK libspdk_bdev.so 00:04:11.046 CC lib/ublk/ublk_rpc.o 00:04:11.046 CC lib/ublk/ublk.o 00:04:11.046 CC lib/nvmf/ctrlr.o 00:04:11.046 CC lib/nvmf/ctrlr_bdev.o 00:04:11.046 CC lib/nvmf/ctrlr_discovery.o 00:04:11.046 CC lib/nbd/nbd.o 00:04:11.046 CC lib/ftl/ftl_core.o 00:04:11.304 CC lib/scsi/dev.o 00:04:11.304 LIB libspdk_blobfs.a 00:04:11.304 CC lib/ftl/ftl_init.o 00:04:11.304 SO libspdk_blobfs.so.10.0 00:04:11.562 SYMLINK libspdk_blobfs.so 00:04:11.562 CC lib/ftl/ftl_layout.o 00:04:11.562 CC lib/scsi/lun.o 00:04:11.562 LIB libspdk_lvol.a 00:04:11.562 SO libspdk_lvol.so.10.0 00:04:11.562 SYMLINK libspdk_lvol.so 00:04:11.562 CC lib/ftl/ftl_debug.o 00:04:11.820 CC lib/ftl/ftl_io.o 00:04:11.820 CC lib/ftl/ftl_sb.o 00:04:11.820 CC lib/nbd/nbd_rpc.o 00:04:11.820 CC lib/nvmf/subsystem.o 00:04:11.820 CC lib/ftl/ftl_l2p.o 00:04:11.820 CC lib/scsi/port.o 00:04:11.820 CC lib/ftl/ftl_l2p_flat.o 00:04:12.077 LIB libspdk_nbd.a 00:04:12.077 CC lib/ftl/ftl_nv_cache.o 00:04:12.077 SO libspdk_nbd.so.7.0 00:04:12.077 LIB libspdk_ublk.a 00:04:12.077 SO libspdk_ublk.so.3.0 00:04:12.077 CC lib/nvmf/nvmf.o 00:04:12.077 SYMLINK libspdk_nbd.so 00:04:12.077 CC lib/scsi/scsi.o 00:04:12.077 CC lib/scsi/scsi_bdev.o 00:04:12.077 CC lib/scsi/scsi_pr.o 00:04:12.077 CC lib/nvmf/nvmf_rpc.o 00:04:12.077 SYMLINK libspdk_ublk.so 00:04:12.077 CC lib/nvmf/transport.o 00:04:12.077 CC lib/nvmf/tcp.o 00:04:12.333 CC lib/scsi/scsi_rpc.o 00:04:12.333 CC lib/scsi/task.o 00:04:12.590 CC lib/ftl/ftl_band.o 00:04:12.590 CC lib/nvmf/stubs.o 00:04:12.847 LIB libspdk_scsi.a 00:04:12.847 SO libspdk_scsi.so.9.0 00:04:13.104 CC lib/ftl/ftl_band_ops.o 00:04:13.104 SYMLINK libspdk_scsi.so 00:04:13.104 CC lib/nvmf/mdns_server.o 00:04:13.104 CC lib/ftl/ftl_writer.o 00:04:13.104 CC lib/ftl/ftl_rq.o 00:04:13.361 CC lib/ftl/ftl_reloc.o 00:04:13.361 CC lib/ftl/ftl_l2p_cache.o 00:04:13.361 CC lib/ftl/ftl_p2l.o 00:04:13.361 CC 
lib/ftl/mngt/ftl_mngt.o 00:04:13.361 CC lib/nvmf/rdma.o 00:04:13.361 CC lib/nvmf/auth.o 00:04:13.361 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:13.619 CC lib/iscsi/conn.o 00:04:13.619 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:13.619 CC lib/iscsi/init_grp.o 00:04:13.619 CC lib/vhost/vhost.o 00:04:13.876 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:13.876 CC lib/iscsi/iscsi.o 00:04:13.876 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:13.876 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:13.876 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:14.134 CC lib/iscsi/md5.o 00:04:14.134 CC lib/iscsi/param.o 00:04:14.134 CC lib/iscsi/portal_grp.o 00:04:14.425 CC lib/iscsi/tgt_node.o 00:04:14.425 CC lib/iscsi/iscsi_subsystem.o 00:04:14.425 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.425 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:14.425 CC lib/iscsi/iscsi_rpc.o 00:04:14.425 CC lib/vhost/vhost_rpc.o 00:04:14.682 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.682 CC lib/iscsi/task.o 00:04:14.682 CC lib/vhost/vhost_scsi.o 00:04:14.682 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.939 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.939 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.939 CC lib/ftl/utils/ftl_conf.o 00:04:14.939 CC lib/ftl/utils/ftl_md.o 00:04:14.939 CC lib/ftl/utils/ftl_mempool.o 00:04:15.197 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.197 CC lib/ftl/utils/ftl_property.o 00:04:15.197 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.197 CC lib/vhost/vhost_blk.o 00:04:15.197 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.453 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:15.453 CC lib/vhost/rte_vhost_user.o 00:04:15.709 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.710 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.710 LIB libspdk_iscsi.a 00:04:15.710 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.710 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.710 CC lib/ftl/base/ftl_base_dev.o 00:04:15.710 SO libspdk_iscsi.so.8.0 00:04:15.710 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.967 CC lib/ftl/ftl_trace.o 00:04:16.223 SYMLINK libspdk_iscsi.so 00:04:16.223 LIB libspdk_ftl.a 00:04:16.479 LIB libspdk_nvmf.a 00:04:16.480 SO libspdk_ftl.so.9.0 00:04:16.736 SO libspdk_nvmf.so.19.0 00:04:16.736 LIB libspdk_vhost.a 00:04:17.012 SO libspdk_vhost.so.8.0 00:04:17.012 SYMLINK libspdk_nvmf.so 00:04:17.012 SYMLINK libspdk_ftl.so 00:04:17.012 SYMLINK libspdk_vhost.so 00:04:17.574 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.574 CC module/accel/dsa/accel_dsa.o 00:04:17.574 CC module/accel/error/accel_error.o 00:04:17.575 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.575 CC module/keyring/file/keyring.o 00:04:17.575 CC module/sock/posix/posix.o 00:04:17.575 CC module/accel/ioat/accel_ioat.o 00:04:17.575 CC module/accel/iaa/accel_iaa.o 00:04:17.575 CC module/blob/bdev/blob_bdev.o 00:04:17.575 CC module/keyring/linux/keyring.o 00:04:17.575 LIB libspdk_env_dpdk_rpc.a 00:04:17.575 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.575 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.831 CC module/keyring/linux/keyring_rpc.o 00:04:17.831 CC module/accel/error/accel_error_rpc.o 00:04:17.831 CC module/keyring/file/keyring_rpc.o 00:04:17.831 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.831 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.831 LIB libspdk_keyring_linux.a 00:04:17.831 LIB libspdk_accel_error.a 00:04:17.831 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.831 LIB libspdk_scheduler_dynamic.a 00:04:17.831 SO 
libspdk_accel_error.so.2.0 00:04:17.831 SO libspdk_keyring_linux.so.1.0 00:04:18.088 SO libspdk_scheduler_dynamic.so.4.0 00:04:18.088 LIB libspdk_accel_iaa.a 00:04:18.088 LIB libspdk_keyring_file.a 00:04:18.088 SYMLINK libspdk_accel_error.so 00:04:18.088 SO libspdk_accel_iaa.so.3.0 00:04:18.088 SYMLINK libspdk_scheduler_dynamic.so 00:04:18.088 SO libspdk_keyring_file.so.1.0 00:04:18.088 SYMLINK libspdk_keyring_linux.so 00:04:18.088 LIB libspdk_accel_dsa.a 00:04:18.088 LIB libspdk_accel_ioat.a 00:04:18.088 LIB libspdk_blob_bdev.a 00:04:18.088 SYMLINK libspdk_accel_iaa.so 00:04:18.088 SO libspdk_accel_ioat.so.6.0 00:04:18.088 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.088 SO libspdk_blob_bdev.so.11.0 00:04:18.088 SO libspdk_accel_dsa.so.5.0 00:04:18.088 SYMLINK libspdk_keyring_file.so 00:04:18.345 SYMLINK libspdk_blob_bdev.so 00:04:18.345 SYMLINK libspdk_accel_ioat.so 00:04:18.345 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.345 SYMLINK libspdk_accel_dsa.so 00:04:18.345 LIB libspdk_scheduler_dpdk_governor.a 00:04:18.345 LIB libspdk_scheduler_gscheduler.a 00:04:18.345 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:18.602 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.602 LIB libspdk_sock_posix.a 00:04:18.602 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:18.602 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.602 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.602 CC module/bdev/gpt/gpt.o 00:04:18.602 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.602 CC module/bdev/error/vbdev_error.o 00:04:18.602 CC module/bdev/delay/vbdev_delay.o 00:04:18.602 CC module/bdev/null/bdev_null.o 00:04:18.602 SO libspdk_sock_posix.so.6.0 00:04:18.602 CC module/bdev/malloc/bdev_malloc.o 00:04:18.602 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.602 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.602 SYMLINK libspdk_sock_posix.so 00:04:18.602 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.859 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.859 LIB libspdk_blobfs_bdev.a 00:04:18.859 SO libspdk_blobfs_bdev.so.6.0 00:04:18.859 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.859 SYMLINK libspdk_blobfs_bdev.so 00:04:19.116 LIB libspdk_bdev_delay.a 00:04:19.116 SO libspdk_bdev_delay.so.6.0 00:04:19.116 LIB libspdk_bdev_malloc.a 00:04:19.116 CC module/bdev/null/bdev_null_rpc.o 00:04:19.116 SO libspdk_bdev_malloc.so.6.0 00:04:19.116 LIB libspdk_bdev_error.a 00:04:19.116 CC module/bdev/passthru/vbdev_passthru.o 00:04:19.116 LIB libspdk_bdev_gpt.a 00:04:19.116 CC module/bdev/nvme/bdev_nvme.o 00:04:19.116 SO libspdk_bdev_error.so.6.0 00:04:19.116 CC module/bdev/split/vbdev_split.o 00:04:19.116 CC module/bdev/raid/bdev_raid.o 00:04:19.116 SYMLINK libspdk_bdev_delay.so 00:04:19.116 SO libspdk_bdev_gpt.so.6.0 00:04:19.116 CC module/bdev/raid/bdev_raid_rpc.o 00:04:19.116 SYMLINK libspdk_bdev_malloc.so 00:04:19.116 CC module/bdev/split/vbdev_split_rpc.o 00:04:19.373 SYMLINK libspdk_bdev_error.so 00:04:19.373 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:19.373 SYMLINK libspdk_bdev_gpt.so 00:04:19.373 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.373 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:19.373 LIB libspdk_bdev_null.a 00:04:19.373 LIB libspdk_bdev_split.a 00:04:19.373 SO libspdk_bdev_null.so.6.0 00:04:19.373 SO libspdk_bdev_split.so.6.0 00:04:19.631 LIB libspdk_bdev_passthru.a 00:04:19.631 SYMLINK libspdk_bdev_null.so 00:04:19.631 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:19.631 SYMLINK libspdk_bdev_split.so 00:04:19.631 CC module/bdev/nvme/nvme_rpc.o 00:04:19.631 SO libspdk_bdev_passthru.so.6.0 
00:04:19.631 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:19.631 CC module/bdev/xnvme/bdev_xnvme.o 00:04:19.631 SYMLINK libspdk_bdev_passthru.so 00:04:19.631 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.888 CC module/bdev/raid/raid0.o 00:04:19.888 LIB libspdk_bdev_lvol.a 00:04:19.888 CC module/bdev/aio/bdev_aio.o 00:04:19.888 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.888 SO libspdk_bdev_lvol.so.6.0 00:04:19.888 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.888 SYMLINK libspdk_bdev_lvol.so 00:04:19.888 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:20.145 LIB libspdk_bdev_zone_block.a 00:04:20.145 CC module/bdev/nvme/vbdev_opal.o 00:04:20.145 SO libspdk_bdev_zone_block.so.6.0 00:04:20.145 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.145 CC module/bdev/ftl/bdev_ftl.o 00:04:20.145 SYMLINK libspdk_bdev_zone_block.so 00:04:20.145 LIB libspdk_bdev_xnvme.a 00:04:20.402 SO libspdk_bdev_xnvme.so.3.0 00:04:20.402 CC module/bdev/iscsi/bdev_iscsi.o 00:04:20.402 LIB libspdk_bdev_aio.a 00:04:20.402 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:20.402 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.402 SYMLINK libspdk_bdev_xnvme.so 00:04:20.402 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.402 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.402 SO libspdk_bdev_aio.so.6.0 00:04:20.402 CC module/bdev/raid/raid1.o 00:04:20.402 SYMLINK libspdk_bdev_aio.so 00:04:20.402 CC module/bdev/raid/concat.o 00:04:20.660 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:20.660 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:20.660 LIB libspdk_bdev_ftl.a 00:04:20.660 SO libspdk_bdev_ftl.so.6.0 00:04:20.660 LIB libspdk_bdev_raid.a 00:04:20.660 SYMLINK libspdk_bdev_ftl.so 00:04:20.918 LIB libspdk_bdev_iscsi.a 00:04:20.918 SO libspdk_bdev_raid.so.6.0 00:04:20.918 SO libspdk_bdev_iscsi.so.6.0 00:04:20.918 SYMLINK libspdk_bdev_iscsi.so 00:04:20.918 SYMLINK libspdk_bdev_raid.so 00:04:21.176 LIB libspdk_bdev_virtio.a 00:04:21.176 SO libspdk_bdev_virtio.so.6.0 00:04:21.176 SYMLINK libspdk_bdev_virtio.so 00:04:22.109 LIB libspdk_bdev_nvme.a 00:04:22.109 SO libspdk_bdev_nvme.so.7.0 00:04:22.367 SYMLINK libspdk_bdev_nvme.so 00:04:22.951 CC module/event/subsystems/iobuf/iobuf.o 00:04:22.951 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:22.951 CC module/event/subsystems/keyring/keyring.o 00:04:22.951 CC module/event/subsystems/sock/sock.o 00:04:22.951 CC module/event/subsystems/vmd/vmd.o 00:04:22.951 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:22.951 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:22.951 CC module/event/subsystems/scheduler/scheduler.o 00:04:22.951 LIB libspdk_event_keyring.a 00:04:22.951 LIB libspdk_event_scheduler.a 00:04:22.951 LIB libspdk_event_sock.a 00:04:22.951 LIB libspdk_event_iobuf.a 00:04:23.209 LIB libspdk_event_vhost_blk.a 00:04:23.209 SO libspdk_event_scheduler.so.4.0 00:04:23.209 SO libspdk_event_keyring.so.1.0 00:04:23.209 SO libspdk_event_sock.so.5.0 00:04:23.209 LIB libspdk_event_vmd.a 00:04:23.209 SO libspdk_event_vhost_blk.so.3.0 00:04:23.209 SO libspdk_event_iobuf.so.3.0 00:04:23.209 SYMLINK libspdk_event_keyring.so 00:04:23.209 SYMLINK libspdk_event_scheduler.so 00:04:23.209 SO libspdk_event_vmd.so.6.0 00:04:23.209 SYMLINK libspdk_event_sock.so 00:04:23.209 SYMLINK libspdk_event_vhost_blk.so 00:04:23.209 SYMLINK libspdk_event_iobuf.so 00:04:23.209 SYMLINK libspdk_event_vmd.so 00:04:23.468 CC module/event/subsystems/accel/accel.o 00:04:23.727 LIB libspdk_event_accel.a 00:04:23.727 SO libspdk_event_accel.so.6.0 00:04:23.727 SYMLINK libspdk_event_accel.so 00:04:23.985 CC 
module/event/subsystems/bdev/bdev.o 00:04:24.243 LIB libspdk_event_bdev.a 00:04:24.243 SO libspdk_event_bdev.so.6.0 00:04:24.243 SYMLINK libspdk_event_bdev.so 00:04:24.501 CC module/event/subsystems/ublk/ublk.o 00:04:24.501 CC module/event/subsystems/scsi/scsi.o 00:04:24.501 CC module/event/subsystems/nbd/nbd.o 00:04:24.501 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:24.501 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:24.759 LIB libspdk_event_scsi.a 00:04:24.759 LIB libspdk_event_nbd.a 00:04:24.759 LIB libspdk_event_ublk.a 00:04:24.759 SO libspdk_event_scsi.so.6.0 00:04:24.759 SO libspdk_event_nbd.so.6.0 00:04:24.759 SO libspdk_event_ublk.so.3.0 00:04:24.759 SYMLINK libspdk_event_nbd.so 00:04:24.759 SYMLINK libspdk_event_scsi.so 00:04:24.759 SYMLINK libspdk_event_ublk.so 00:04:24.759 LIB libspdk_event_nvmf.a 00:04:25.017 SO libspdk_event_nvmf.so.6.0 00:04:25.017 SYMLINK libspdk_event_nvmf.so 00:04:25.017 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.017 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.275 LIB libspdk_event_vhost_scsi.a 00:04:25.275 LIB libspdk_event_iscsi.a 00:04:25.275 SO libspdk_event_vhost_scsi.so.3.0 00:04:25.275 SO libspdk_event_iscsi.so.6.0 00:04:25.534 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.534 SYMLINK libspdk_event_iscsi.so 00:04:25.534 SO libspdk.so.6.0 00:04:25.534 SYMLINK libspdk.so 00:04:25.792 CC app/spdk_nvme_perf/perf.o 00:04:25.792 CXX app/trace/trace.o 00:04:25.792 CC app/spdk_nvme_identify/identify.o 00:04:25.792 CC app/trace_record/trace_record.o 00:04:25.792 CC app/spdk_lspci/spdk_lspci.o 00:04:25.792 CC app/nvmf_tgt/nvmf_main.o 00:04:25.792 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.051 CC app/spdk_tgt/spdk_tgt.o 00:04:26.051 CC test/thread/poller_perf/poller_perf.o 00:04:26.051 CC examples/util/zipf/zipf.o 00:04:26.051 LINK spdk_lspci 00:04:26.051 LINK spdk_trace_record 00:04:26.051 LINK nvmf_tgt 00:04:26.051 LINK poller_perf 00:04:26.051 LINK zipf 00:04:26.051 LINK spdk_tgt 00:04:26.051 LINK iscsi_tgt 00:04:26.309 CC app/spdk_nvme_discover/discovery_aer.o 00:04:26.309 CC app/spdk_top/spdk_top.o 00:04:26.309 LINK spdk_trace 00:04:26.567 CC examples/ioat/perf/perf.o 00:04:26.567 CC app/spdk_dd/spdk_dd.o 00:04:26.567 LINK spdk_nvme_discover 00:04:26.567 CC test/dma/test_dma/test_dma.o 00:04:26.567 CC app/fio/nvme/fio_plugin.o 00:04:26.567 CC test/app/bdev_svc/bdev_svc.o 00:04:26.825 CC app/fio/bdev/fio_plugin.o 00:04:26.825 LINK ioat_perf 00:04:26.825 LINK bdev_svc 00:04:26.825 LINK spdk_nvme_identify 00:04:26.825 CC app/vhost/vhost.o 00:04:26.825 LINK spdk_nvme_perf 00:04:27.083 LINK spdk_dd 00:04:27.083 CC examples/ioat/verify/verify.o 00:04:27.083 LINK test_dma 00:04:27.083 LINK vhost 00:04:27.342 TEST_HEADER include/spdk/accel.h 00:04:27.342 TEST_HEADER include/spdk/accel_module.h 00:04:27.342 TEST_HEADER include/spdk/assert.h 00:04:27.342 TEST_HEADER include/spdk/barrier.h 00:04:27.342 TEST_HEADER include/spdk/base64.h 00:04:27.342 TEST_HEADER include/spdk/bdev.h 00:04:27.342 TEST_HEADER include/spdk/bdev_module.h 00:04:27.342 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.342 TEST_HEADER include/spdk/bit_array.h 00:04:27.342 TEST_HEADER include/spdk/bit_pool.h 00:04:27.342 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.342 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.342 TEST_HEADER include/spdk/blobfs.h 00:04:27.342 TEST_HEADER include/spdk/blob.h 00:04:27.342 TEST_HEADER include/spdk/conf.h 00:04:27.342 TEST_HEADER include/spdk/config.h 00:04:27.342 TEST_HEADER include/spdk/cpuset.h 00:04:27.342 TEST_HEADER 
include/spdk/crc16.h 00:04:27.342 TEST_HEADER include/spdk/crc32.h 00:04:27.342 TEST_HEADER include/spdk/crc64.h 00:04:27.342 TEST_HEADER include/spdk/dif.h 00:04:27.342 TEST_HEADER include/spdk/dma.h 00:04:27.342 TEST_HEADER include/spdk/endian.h 00:04:27.342 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.342 TEST_HEADER include/spdk/env.h 00:04:27.342 TEST_HEADER include/spdk/event.h 00:04:27.342 TEST_HEADER include/spdk/fd_group.h 00:04:27.342 TEST_HEADER include/spdk/fd.h 00:04:27.342 TEST_HEADER include/spdk/file.h 00:04:27.342 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:27.342 TEST_HEADER include/spdk/ftl.h 00:04:27.342 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.342 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.342 TEST_HEADER include/spdk/hexlify.h 00:04:27.342 TEST_HEADER include/spdk/histogram_data.h 00:04:27.342 TEST_HEADER include/spdk/idxd.h 00:04:27.342 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.342 TEST_HEADER include/spdk/init.h 00:04:27.342 TEST_HEADER include/spdk/ioat.h 00:04:27.343 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.343 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.343 LINK spdk_nvme 00:04:27.343 TEST_HEADER include/spdk/json.h 00:04:27.343 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.343 TEST_HEADER include/spdk/keyring.h 00:04:27.343 LINK verify 00:04:27.343 TEST_HEADER include/spdk/keyring_module.h 00:04:27.343 TEST_HEADER include/spdk/likely.h 00:04:27.343 TEST_HEADER include/spdk/log.h 00:04:27.343 TEST_HEADER include/spdk/lvol.h 00:04:27.343 TEST_HEADER include/spdk/memory.h 00:04:27.343 TEST_HEADER include/spdk/mmio.h 00:04:27.343 TEST_HEADER include/spdk/nbd.h 00:04:27.343 TEST_HEADER include/spdk/notify.h 00:04:27.343 TEST_HEADER include/spdk/nvme.h 00:04:27.343 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.343 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.343 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.343 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.343 LINK spdk_bdev 00:04:27.343 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.343 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.343 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.343 TEST_HEADER include/spdk/nvmf.h 00:04:27.343 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.343 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.343 TEST_HEADER include/spdk/opal.h 00:04:27.343 TEST_HEADER include/spdk/opal_spec.h 00:04:27.343 TEST_HEADER include/spdk/pci_ids.h 00:04:27.343 TEST_HEADER include/spdk/pipe.h 00:04:27.343 TEST_HEADER include/spdk/queue.h 00:04:27.343 TEST_HEADER include/spdk/reduce.h 00:04:27.343 TEST_HEADER include/spdk/rpc.h 00:04:27.343 TEST_HEADER include/spdk/scheduler.h 00:04:27.343 TEST_HEADER include/spdk/scsi.h 00:04:27.343 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.343 TEST_HEADER include/spdk/sock.h 00:04:27.343 CC examples/vmd/led/led.o 00:04:27.343 TEST_HEADER include/spdk/stdinc.h 00:04:27.343 TEST_HEADER include/spdk/string.h 00:04:27.343 TEST_HEADER include/spdk/thread.h 00:04:27.343 TEST_HEADER include/spdk/trace.h 00:04:27.343 TEST_HEADER include/spdk/trace_parser.h 00:04:27.343 TEST_HEADER include/spdk/tree.h 00:04:27.343 TEST_HEADER include/spdk/ublk.h 00:04:27.343 TEST_HEADER include/spdk/util.h 00:04:27.343 TEST_HEADER include/spdk/uuid.h 00:04:27.343 TEST_HEADER include/spdk/version.h 00:04:27.343 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.343 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.343 TEST_HEADER include/spdk/vhost.h 00:04:27.343 TEST_HEADER include/spdk/vmd.h 00:04:27.343 TEST_HEADER include/spdk/xor.h 00:04:27.343 TEST_HEADER 
include/spdk/zipf.h 00:04:27.343 CXX test/cpp_headers/accel.o 00:04:27.343 LINK lsvmd 00:04:27.601 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.601 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.601 LINK spdk_top 00:04:27.601 CC test/event/event_perf/event_perf.o 00:04:27.601 LINK led 00:04:27.601 CC test/event/reactor/reactor.o 00:04:27.601 CC test/event/reactor_perf/reactor_perf.o 00:04:27.601 CXX test/cpp_headers/accel_module.o 00:04:27.601 CXX test/cpp_headers/assert.o 00:04:27.860 LINK event_perf 00:04:27.860 CC test/app/histogram_perf/histogram_perf.o 00:04:27.860 LINK reactor 00:04:27.860 LINK reactor_perf 00:04:27.860 LINK nvme_fuzz 00:04:27.860 CXX test/cpp_headers/barrier.o 00:04:27.860 LINK histogram_perf 00:04:27.860 CC examples/idxd/perf/perf.o 00:04:27.860 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:28.118 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:28.118 CC test/event/app_repeat/app_repeat.o 00:04:28.118 CXX test/cpp_headers/base64.o 00:04:28.118 LINK mem_callbacks 00:04:28.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.118 CC examples/thread/thread/thread_ex.o 00:04:28.118 CC test/env/vtophys/vtophys.o 00:04:28.118 CC examples/sock/hello_world/hello_sock.o 00:04:28.118 LINK app_repeat 00:04:28.118 CXX test/cpp_headers/bdev.o 00:04:28.377 LINK interrupt_tgt 00:04:28.377 LINK vtophys 00:04:28.377 LINK idxd_perf 00:04:28.377 CC test/app/jsoncat/jsoncat.o 00:04:28.377 LINK thread 00:04:28.377 CXX test/cpp_headers/bdev_module.o 00:04:28.636 LINK hello_sock 00:04:28.636 LINK jsoncat 00:04:28.636 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.636 CC test/event/scheduler/scheduler.o 00:04:28.636 CC test/env/memory/memory_ut.o 00:04:28.636 LINK vhost_fuzz 00:04:28.636 CXX test/cpp_headers/bdev_zone.o 00:04:28.636 CC test/nvme/aer/aer.o 00:04:28.636 CC test/nvme/reset/reset.o 00:04:28.636 LINK env_dpdk_post_init 00:04:28.894 LINK scheduler 00:04:28.894 CC examples/accel/perf/accel_perf.o 00:04:28.894 CC examples/blob/hello_world/hello_blob.o 00:04:28.894 CXX test/cpp_headers/bit_array.o 00:04:29.152 LINK reset 00:04:29.152 LINK aer 00:04:29.152 CC examples/nvme/hello_world/hello_world.o 00:04:29.152 CC examples/blob/cli/blobcli.o 00:04:29.152 CXX test/cpp_headers/bit_pool.o 00:04:29.152 CC examples/nvme/reconnect/reconnect.o 00:04:29.152 LINK hello_blob 00:04:29.152 CXX test/cpp_headers/blob_bdev.o 00:04:29.410 CC test/nvme/sgl/sgl.o 00:04:29.410 LINK hello_world 00:04:29.410 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:29.410 CXX test/cpp_headers/blobfs_bdev.o 00:04:29.410 LINK accel_perf 00:04:29.669 CC examples/nvme/arbitration/arbitration.o 00:04:29.669 CXX test/cpp_headers/blobfs.o 00:04:29.669 LINK reconnect 00:04:29.669 LINK sgl 00:04:29.669 LINK iscsi_fuzz 00:04:29.669 LINK blobcli 00:04:29.669 CXX test/cpp_headers/blob.o 00:04:29.669 CC test/nvme/e2edp/nvme_dp.o 00:04:29.669 CC test/nvme/overhead/overhead.o 00:04:29.928 CXX test/cpp_headers/conf.o 00:04:29.928 LINK memory_ut 00:04:29.928 CXX test/cpp_headers/config.o 00:04:29.928 CC test/nvme/err_injection/err_injection.o 00:04:29.928 LINK arbitration 00:04:30.186 CC test/nvme/startup/startup.o 00:04:30.186 CXX test/cpp_headers/cpuset.o 00:04:30.186 CC examples/bdev/hello_world/hello_bdev.o 00:04:30.186 CC test/app/stub/stub.o 00:04:30.186 LINK overhead 00:04:30.186 LINK nvme_manage 00:04:30.186 LINK err_injection 00:04:30.186 LINK nvme_dp 00:04:30.186 CC test/nvme/reserve/reserve.o 00:04:30.186 CC test/env/pci/pci_ut.o 00:04:30.186 CXX test/cpp_headers/crc16.o 00:04:30.186 LINK 
stub 00:04:30.186 LINK startup 00:04:30.445 LINK hello_bdev 00:04:30.445 CC examples/nvme/hotplug/hotplug.o 00:04:30.445 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.445 CXX test/cpp_headers/crc32.o 00:04:30.445 CXX test/cpp_headers/crc64.o 00:04:30.445 CXX test/cpp_headers/dif.o 00:04:30.445 LINK reserve 00:04:30.445 CC test/rpc_client/rpc_client_test.o 00:04:30.703 CXX test/cpp_headers/dma.o 00:04:30.703 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.703 CXX test/cpp_headers/endian.o 00:04:30.703 CXX test/cpp_headers/env_dpdk.o 00:04:30.703 LINK hotplug 00:04:30.703 LINK rpc_client_test 00:04:30.703 LINK cmb_copy 00:04:30.703 LINK pci_ut 00:04:30.703 CC test/nvme/simple_copy/simple_copy.o 00:04:30.960 CXX test/cpp_headers/env.o 00:04:30.960 CC examples/nvme/abort/abort.o 00:04:30.960 CC test/accel/dif/dif.o 00:04:30.960 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.960 CC test/nvme/connect_stress/connect_stress.o 00:04:30.960 CXX test/cpp_headers/event.o 00:04:30.960 CXX test/cpp_headers/fd_group.o 00:04:31.217 LINK simple_copy 00:04:31.217 LINK pmr_persistence 00:04:31.217 CXX test/cpp_headers/fd.o 00:04:31.217 CC test/blobfs/mkfs/mkfs.o 00:04:31.217 LINK connect_stress 00:04:31.217 CXX test/cpp_headers/file.o 00:04:31.217 LINK abort 00:04:31.475 LINK bdevperf 00:04:31.475 CC test/lvol/esnap/esnap.o 00:04:31.475 CC test/nvme/boot_partition/boot_partition.o 00:04:31.475 CXX test/cpp_headers/ftl.o 00:04:31.475 LINK mkfs 00:04:31.475 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.475 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.475 CC test/nvme/compliance/nvme_compliance.o 00:04:31.475 LINK dif 00:04:31.475 LINK boot_partition 00:04:31.733 CC test/nvme/fdp/fdp.o 00:04:31.733 CXX test/cpp_headers/gpt_spec.o 00:04:31.733 LINK doorbell_aers 00:04:31.733 LINK fused_ordering 00:04:31.733 CC test/nvme/cuse/cuse.o 00:04:31.733 CXX test/cpp_headers/hexlify.o 00:04:31.733 CXX test/cpp_headers/histogram_data.o 00:04:31.991 CXX test/cpp_headers/idxd.o 00:04:31.991 CXX test/cpp_headers/idxd_spec.o 00:04:31.991 CC examples/nvmf/nvmf/nvmf.o 00:04:31.991 CXX test/cpp_headers/init.o 00:04:31.991 LINK nvme_compliance 00:04:31.991 CXX test/cpp_headers/ioat.o 00:04:31.991 CC test/bdev/bdevio/bdevio.o 00:04:31.991 LINK fdp 00:04:31.991 CXX test/cpp_headers/ioat_spec.o 00:04:31.991 CXX test/cpp_headers/iscsi_spec.o 00:04:32.249 CXX test/cpp_headers/json.o 00:04:32.249 CXX test/cpp_headers/jsonrpc.o 00:04:32.249 CXX test/cpp_headers/keyring.o 00:04:32.249 CXX test/cpp_headers/keyring_module.o 00:04:32.249 CXX test/cpp_headers/likely.o 00:04:32.249 CXX test/cpp_headers/log.o 00:04:32.249 LINK nvmf 00:04:32.249 CXX test/cpp_headers/lvol.o 00:04:32.249 CXX test/cpp_headers/memory.o 00:04:32.507 CXX test/cpp_headers/mmio.o 00:04:32.507 CXX test/cpp_headers/nbd.o 00:04:32.507 CXX test/cpp_headers/notify.o 00:04:32.507 CXX test/cpp_headers/nvme.o 00:04:32.507 CXX test/cpp_headers/nvme_intel.o 00:04:32.507 LINK bdevio 00:04:32.507 CXX test/cpp_headers/nvme_ocssd.o 00:04:32.507 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.507 CXX test/cpp_headers/nvme_spec.o 00:04:32.507 CXX test/cpp_headers/nvme_zns.o 00:04:32.507 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.507 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.766 CXX test/cpp_headers/nvmf.o 00:04:32.766 CXX test/cpp_headers/nvmf_spec.o 00:04:32.766 CXX test/cpp_headers/nvmf_transport.o 00:04:32.766 CXX test/cpp_headers/opal.o 00:04:32.766 CXX test/cpp_headers/opal_spec.o 00:04:32.766 CXX test/cpp_headers/pci_ids.o 00:04:32.766 CXX 
test/cpp_headers/pipe.o 00:04:32.766 CXX test/cpp_headers/queue.o 00:04:32.766 CXX test/cpp_headers/reduce.o 00:04:32.766 CXX test/cpp_headers/rpc.o 00:04:32.766 CXX test/cpp_headers/scheduler.o 00:04:33.024 CXX test/cpp_headers/scsi.o 00:04:33.025 CXX test/cpp_headers/scsi_spec.o 00:04:33.025 CXX test/cpp_headers/sock.o 00:04:33.025 CXX test/cpp_headers/stdinc.o 00:04:33.025 CXX test/cpp_headers/string.o 00:04:33.025 CXX test/cpp_headers/thread.o 00:04:33.025 CXX test/cpp_headers/trace.o 00:04:33.025 CXX test/cpp_headers/trace_parser.o 00:04:33.025 CXX test/cpp_headers/tree.o 00:04:33.025 CXX test/cpp_headers/ublk.o 00:04:33.025 CXX test/cpp_headers/util.o 00:04:33.283 CXX test/cpp_headers/uuid.o 00:04:33.283 CXX test/cpp_headers/version.o 00:04:33.283 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.283 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.283 LINK cuse 00:04:33.283 CXX test/cpp_headers/vhost.o 00:04:33.283 CXX test/cpp_headers/vmd.o 00:04:33.283 CXX test/cpp_headers/xor.o 00:04:33.283 CXX test/cpp_headers/zipf.o 00:04:38.542 LINK esnap 00:04:39.107 00:04:39.107 real 1m9.721s 00:04:39.107 user 6m13.726s 00:04:39.107 sys 1m20.050s 00:04:39.107 17:10:49 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:39.107 ************************************ 00:04:39.107 END TEST make 00:04:39.107 ************************************ 00:04:39.107 17:10:49 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.107 17:10:49 -- common/autotest_common.sh@1142 -- $ return 0 00:04:39.107 17:10:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.107 17:10:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.107 17:10:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.107 17:10:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.107 17:10:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.107 17:10:49 -- pm/common@44 -- $ pid=6078 00:04:39.107 17:10:49 -- pm/common@50 -- $ kill -TERM 6078 00:04:39.107 17:10:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.107 17:10:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.107 17:10:49 -- pm/common@44 -- $ pid=6079 00:04:39.107 17:10:49 -- pm/common@50 -- $ kill -TERM 6079 00:04:39.107 17:10:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.107 17:10:49 -- nvmf/common.sh@7 -- # uname -s 00:04:39.107 17:10:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.107 17:10:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.107 17:10:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.107 17:10:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.107 17:10:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.107 17:10:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.107 17:10:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.107 17:10:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.107 17:10:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.107 17:10:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.107 17:10:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bf447e3-99af-44cc-8bbd-f884be8c0416 00:04:39.107 17:10:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=7bf447e3-99af-44cc-8bbd-f884be8c0416 00:04:39.107 17:10:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:39.107 17:10:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.107 17:10:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.107 17:10:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.107 17:10:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.107 17:10:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.107 17:10:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.107 17:10:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.107 17:10:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.107 17:10:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.107 17:10:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.107 17:10:49 -- paths/export.sh@5 -- # export PATH 00:04:39.107 17:10:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.107 17:10:49 -- nvmf/common.sh@47 -- # : 0 00:04:39.107 17:10:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.107 17:10:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.107 17:10:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.107 17:10:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.107 17:10:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.107 17:10:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.107 17:10:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.107 17:10:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.107 17:10:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.107 17:10:49 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.107 17:10:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.107 17:10:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.107 17:10:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.107 17:10:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.107 17:10:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.107 17:10:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.107 17:10:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.108 17:10:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.108 17:10:49 -- spdk/autotest.sh@48 -- # udevadm_pid=67415 00:04:39.108 17:10:49 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:39.108 17:10:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.108 17:10:49 -- pm/common@17 -- # local monitor 00:04:39.108 17:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.108 17:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.108 17:10:49 -- pm/common@25 -- # sleep 1 00:04:39.108 17:10:49 -- pm/common@21 -- # date +%s 00:04:39.108 17:10:49 -- pm/common@21 -- # date +%s 00:04:39.108 17:10:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721063449 00:04:39.108 17:10:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721063449 00:04:39.108 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721063449_collect-vmstat.pm.log 00:04:39.108 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721063449_collect-cpu-load.pm.log 00:04:40.482 17:10:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.482 17:10:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.482 17:10:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.482 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:04:40.482 17:10:50 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.482 17:10:50 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:40.482 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:04:40.482 17:10:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.482 17:10:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.482 17:10:50 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.482 17:10:50 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.482 17:10:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.482 17:10:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.482 17:10:50 -- common/autotest_common.sh@1455 -- # uname 00:04:40.482 17:10:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:40.482 17:10:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.482 17:10:50 -- common/autotest_common.sh@1475 -- # uname 00:04:40.482 17:10:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:40.482 17:10:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:40.482 17:10:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:40.482 17:10:50 -- spdk/autotest.sh@72 -- # hash lcov 00:04:40.482 17:10:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:40.482 17:10:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:40.482 --rc lcov_branch_coverage=1 00:04:40.482 --rc lcov_function_coverage=1 00:04:40.482 --rc genhtml_branch_coverage=1 00:04:40.482 --rc genhtml_function_coverage=1 00:04:40.482 --rc genhtml_legend=1 00:04:40.482 --rc geninfo_all_blocks=1 00:04:40.482 ' 00:04:40.482 17:10:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:40.482 --rc lcov_branch_coverage=1 00:04:40.482 --rc lcov_function_coverage=1 00:04:40.482 --rc genhtml_branch_coverage=1 00:04:40.482 --rc genhtml_function_coverage=1 00:04:40.482 --rc genhtml_legend=1 00:04:40.482 --rc geninfo_all_blocks=1 00:04:40.482 ' 00:04:40.482 17:10:50 -- spdk/autotest.sh@81 -- # export 
'LCOV=lcov 00:04:40.482 --rc lcov_branch_coverage=1 00:04:40.482 --rc lcov_function_coverage=1 00:04:40.482 --rc genhtml_branch_coverage=1 00:04:40.482 --rc genhtml_function_coverage=1 00:04:40.482 --rc genhtml_legend=1 00:04:40.482 --rc geninfo_all_blocks=1 00:04:40.482 --no-external' 00:04:40.482 17:10:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:40.482 --rc lcov_branch_coverage=1 00:04:40.482 --rc lcov_function_coverage=1 00:04:40.482 --rc genhtml_branch_coverage=1 00:04:40.482 --rc genhtml_function_coverage=1 00:04:40.482 --rc genhtml_legend=1 00:04:40.482 --rc geninfo_all_blocks=1 00:04:40.482 --no-external' 00:04:40.482 17:10:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:40.482 lcov: LCOV version 1.14 00:04:40.482 17:10:51 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:55.420 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.420 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:10.385 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:10.385 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 
00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:10.386 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:10.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:10.386 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:10.387 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:10.387 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:12.285 17:11:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:12.285 17:11:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.285 17:11:23 -- common/autotest_common.sh@10 -- # set +x 00:05:12.285 17:11:23 -- spdk/autotest.sh@91 -- # rm -f 00:05:12.285 17:11:23 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.428 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:13.428 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:13.428 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:13.686 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:13.686 17:11:24 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:13.686 17:11:24 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
00:05:13.686 17:11:24 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:13.686 17:11:24 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2c2n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme2c2n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n2 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme3n2 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:13.686 17:11:24 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n3 00:05:13.686 17:11:24 -- common/autotest_common.sh@1662 -- # local device=nvme3n3 00:05:13.686 17:11:24 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:13.686 17:11:24 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:13.686 17:11:24 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:13.686 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.686 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.686 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:13.686 17:11:24 -- scripts/common.sh@378 -- 
# local block=/dev/nvme0n1 pt 00:05:13.686 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:13.686 No valid GPT data, bailing 00:05:13.686 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:13.686 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.686 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.686 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:13.686 1+0 records in 00:05:13.686 1+0 records out 00:05:13.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154216 s, 68.0 MB/s 00:05:13.686 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.686 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.686 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:13.686 17:11:24 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:13.686 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:13.686 No valid GPT data, bailing 00:05:13.686 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.686 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.686 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.686 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:13.686 1+0 records in 00:05:13.686 1+0 records out 00:05:13.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397625 s, 264 MB/s 00:05:13.686 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.686 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.686 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:13.686 17:11:24 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:13.686 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:13.944 No valid GPT data, bailing 00:05:13.944 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:13.944 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.944 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.944 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:13.944 1+0 records in 00:05:13.944 1+0 records out 00:05:13.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505956 s, 207 MB/s 00:05:13.945 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.945 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.945 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:13.945 17:11:24 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:13.945 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:13.945 No valid GPT data, bailing 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.945 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.945 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:13.945 1+0 records in 00:05:13.945 1+0 records out 00:05:13.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037497 s, 280 MB/s 00:05:13.945 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.945 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.945 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n2 00:05:13.945 17:11:24 -- 
scripts/common.sh@378 -- # local block=/dev/nvme3n2 pt 00:05:13.945 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:13.945 No valid GPT data, bailing 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.945 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.945 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:13.945 1+0 records in 00:05:13.945 1+0 records out 00:05:13.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508927 s, 206 MB/s 00:05:13.945 17:11:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.945 17:11:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:13.945 17:11:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n3 00:05:13.945 17:11:24 -- scripts/common.sh@378 -- # local block=/dev/nvme3n3 pt 00:05:13.945 17:11:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:13.945 No valid GPT data, bailing 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:13.945 17:11:24 -- scripts/common.sh@391 -- # pt= 00:05:13.945 17:11:24 -- scripts/common.sh@392 -- # return 1 00:05:13.945 17:11:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:13.945 1+0 records in 00:05:13.945 1+0 records out 00:05:13.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533896 s, 196 MB/s 00:05:13.945 17:11:24 -- spdk/autotest.sh@118 -- # sync 00:05:14.202 17:11:24 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.202 17:11:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.203 17:11:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:16.101 17:11:26 -- spdk/autotest.sh@124 -- # uname -s 00:05:16.101 17:11:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:16.101 17:11:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:16.101 17:11:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.101 17:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.101 17:11:26 -- common/autotest_common.sh@10 -- # set +x 00:05:16.101 ************************************ 00:05:16.101 START TEST setup.sh 00:05:16.101 ************************************ 00:05:16.101 17:11:26 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:16.101 * Looking for test storage... 00:05:16.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.101 17:11:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:16.101 17:11:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:16.101 17:11:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:16.102 17:11:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.102 17:11:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.102 17:11:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.102 ************************************ 00:05:16.102 START TEST acl 00:05:16.102 ************************************ 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:16.102 * Looking for test storage... 
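The wipe step traced above loops over every non-partition NVMe namespace, asks block_in_use whether a partition table is present (spdk-gpt.py first, then blkid -s PTTYPE), and zeroes the first MiB of any free device with dd. A minimal sketch of that shape, assuming helper internals the trace only hints at; the real logic lives in scripts/common.sh and autotest.sh and may differ:

    shopt -s extglob
    block_in_use() {
        local block=$1 pt
        # Treat a valid GPT label or any blkid-reported partition-table type as "in use".
        /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" && return 0
        pt=$(blkid -s PTTYPE -o value "$block" || true)
        [[ -n $pt ]]
    }
    for dev in /dev/nvme*n!(*p*); do
        [[ -z ${zoned_devs[${dev##*/}]:-} ]] || continue   # zoned namespaces are left alone
        block_in_use "$dev" && continue                     # keep devices that already carry data
        dd if=/dev/zero of="$dev" bs=1M count=1             # clear stale metadata before the tests run
    done

In the trace this is exactly the path taken: spdk-gpt.py reports "No valid GPT data, bailing", blkid returns an empty PTTYPE, block_in_use returns 1, and the dd wipe runs for each namespace.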
00:05:16.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2c2n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2c2n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n2 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n2 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
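The get_zoned_devs loop traced here (and earlier, before the wipe step) probes each NVMe block device's queue/zoned attribute; any value other than "none" marks a zoned namespace that later stages must skip. A rough sketch of the two helpers as the trace shows them, with details the log never reaches (every device here reports "none") filled in as assumptions:

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # "none" means a conventional device
    }
    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme bdf
        for nvme in /sys/block/nvme*; do
            is_block_zoned "${nvme##*/}" || continue
            zoned_devs[${nvme##*/}]=1   # placeholder value; the real helper ties the entry to the PCI bdf
        done
    }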
00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n3 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n3 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:16.102 17:11:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:16.102 17:11:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:16.102 17:11:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.102 17:11:26 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.477 17:11:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:17.477 17:11:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:17.477 17:11:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.477 17:11:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:17.477 17:11:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.477 17:11:27 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.736 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:17.736 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:17.736 17:11:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.308 Hugepages 00:05:18.308 node hugesize free / total 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.308 00:05:18.308 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:18.308 17:11:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.308 17:11:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.574 17:11:29 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:18.574 17:11:29 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:18.574 17:11:29 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.574 17:11:29 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.574 17:11:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:18.574 ************************************ 00:05:18.574 START TEST denied 00:05:18.574 ************************************ 00:05:18.574 17:11:29 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:18.574 17:11:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:18.574 17:11:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:18.574 17:11:29 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:18.574 17:11:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.574 17:11:29 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.948 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.948 17:11:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.539 00:05:26.539 real 0m7.107s 00:05:26.539 user 0m0.830s 00:05:26.539 sys 0m1.314s 00:05:26.539 17:11:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.539 17:11:36 setup.sh.acl.denied -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.539 ************************************ 00:05:26.539 END TEST denied 00:05:26.539 ************************************ 00:05:26.539 17:11:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:26.539 17:11:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:26.539 17:11:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.539 17:11:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.539 17:11:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:26.539 ************************************ 00:05:26.539 START TEST allowed 00:05:26.539 ************************************ 00:05:26.539 17:11:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:26.539 17:11:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:26.539 17:11:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:26.539 17:11:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:26.539 17:11:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.539 17:11:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.105 17:11:37 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.041 00:05:28.041 real 0m2.267s 00:05:28.041 user 0m1.063s 00:05:28.041 sys 0m1.190s 00:05:28.041 17:11:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:28.041 17:11:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:28.041 ************************************ 00:05:28.041 END TEST allowed 00:05:28.041 ************************************ 00:05:28.041 17:11:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:28.041 00:05:28.041 real 0m12.027s 00:05:28.041 user 0m3.150s 00:05:28.041 sys 0m3.904s 00:05:28.041 17:11:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.041 17:11:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:28.041 ************************************ 00:05:28.041 END TEST acl 00:05:28.041 ************************************ 00:05:28.041 17:11:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:28.041 17:11:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:28.041 17:11:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.041 17:11:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.041 17:11:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:28.041 ************************************ 00:05:28.041 START TEST hugepages 00:05:28.041 ************************************ 00:05:28.041 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:28.301 * Looking for test storage... 00:05:28.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4400744 kB' 'MemAvailable: 7380628 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 444544 kB' 'Inactive: 2842432 kB' 'Active(anon): 112504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842432 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 103408 kB' 'Mapped: 48772 kB' 'Shmem: 10516 kB' 'KReclaimable: 84700 kB' 'Slab: 164960 kB' 'SReclaimable: 84700 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 6540 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.301 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.302 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.303 17:11:38 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:28.303 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:28.303 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.303 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.303 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.303 ************************************ 00:05:28.303 START TEST default_setup 00:05:28.303 ************************************ 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.303 17:11:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.436 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.436 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:29.436 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.436 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6495984 kB' 'MemAvailable: 9475632 kB' 'Buffers: 2436 kB' 'Cached: 3182540 kB' 'SwapCached: 0 kB' 'Active: 462644 kB' 'Inactive: 2842452 kB' 'Active(anon): 130604 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48916 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164468 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80280 kB' 'KernelStack: 6592 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.436 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.436 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
... (setup/common.sh@31-32: the read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue pattern repeats for each remaining /proc/meminfo key -- Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- none of which matches) ...
00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
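The trace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo one "key: value" pair at a time until it reaches the requested key (here AnonHugePages) and echoing that key's value. A minimal sketch of that helper, reconstructed from the traced line numbers -- the exact SPDK implementation may differ, and the per-node branch and "Node N" prefix stripping are only inferred from the @23/@29 lines:

    get_meminfo() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node id
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # use the per-node meminfo file when a node id is supplied and it exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node [0-9]* }")    # per-node lines carry a "Node <n> " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                   # value only; the trailing "kB" unit lands in $_
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Used the same way the trace does: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp), resv=$(get_meminfo HugePages_Rsvd).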
00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:29.701 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6495812 kB' 'MemAvailable: 9475460 kB' 'Buffers: 2436 kB' 'Cached: 3182540 kB' 'SwapCached: 0 kB' 'Active: 462592 kB' 'Inactive: 2842452 kB' 'Active(anon): 130552 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164472 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80284 kB' 'KernelStack: 6576 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
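The snapshot printed above already contains the state this default_setup case is about to verify: HugePages_Total: 1024 and HugePages_Free: 1024 with Hugepagesize: 2048 kB, and those figures are self-consistent, since 1024 pages x 2048 kB/page = 2097152 kB, exactly the reported Hugetlb: 2097152 kB (2 GiB of hugepage memory on this roughly 12 GB VM).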
... (setup/common.sh@31-32: read -r var val _ then [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeats for every key from MemTotal through HugePages_Rsvd in the snapshot above; none matches) ...
00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
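For reference, HugePages_Rsvd counts hugepages that a mapping has reserved but not yet faulted in, and HugePages_Surp counts surplus pages allocated beyond vm.nr_hugepages via overcommit; both are expected to be 0 after a clean default setup. The same four counters the test fetches one by one can also be read in a single pass, e.g.:

    # illustrative alternative to repeated get_meminfo calls
    awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ {print $1, $2}' /proc/meminfo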
00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:29.703 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496392 kB' 'MemAvailable: 9476040 kB' 'Buffers: 2436 kB' 'Cached: 3182540 kB' 'SwapCached: 0 kB' 'Active: 462672 kB' 'Inactive: 2842452 kB' 'Active(anon): 130632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121696 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164472 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80284 kB' 'KernelStack: 6544 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
... (setup/common.sh@31-32: the scan walks the snapshot again, testing every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipping each with continue) ...
00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:29.705 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
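The setup/hugepages.sh fragment being traced here boils down to a bookkeeping step followed by two assertions on the hugepage pool. A sketch of that logic, reconstructed from the @97-@110 trace lines (the variable names follow the trace; the verify_nr_hugepages wrapper name and the local nr_hugepages=1024 assignment are made up for the sketch -- in the real script the expected count is set earlier):

    verify_nr_hugepages() {                    # hypothetical wrapper for the traced @97-@110 block
        local anon surp resv nr_hugepages=1024
        anon=$(get_meminfo AnonHugePages)      # -> 0 in this run
        surp=$(get_meminfo HugePages_Surp)     # -> 0
        resv=$(get_meminfo HugePages_Rsvd)     # -> 0
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # every configured page must be accounted for: 1024 == 1024 + 0 + 0
        (( 1024 == nr_hugepages + surp + resv ))
        (( 1024 == nr_hugepages ))
        # the trace continues by re-reading HugePages_Total to confirm the pool size
        get_meminfo HugePages_Total            # -> 1024
    }

Under set -e (as these autotest scripts run), either arithmetic assertion evaluating to false would abort the test here.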
'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.705 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
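The wall of xtrace above is the harness's meminfo lookup running once per field: the file is read into an array, any leading "Node <n> " prefix is stripped, each entry is split on ': ', and the loop keeps hitting `continue` until the requested key matches, at which point the value is echoed back. Below is a standalone sketch of that pattern; the function name and variables are illustrative, not the helper defined in setup/common.sh, and it assumes a reasonably recent bash.

get_meminfo_value() {
    # Mirror of the lookup pattern in the trace: use the system-wide file, or a
    # NUMA node's own meminfo when a node id is given and that file exists.
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node meminfo prefixes every line with "Node <n> "; drop that first.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        # Split "HugePages_Total:     1024" into key and value on ': '.
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
# Example lookups matching the ones traced above:
#   get_meminfo_value HugePages_Total      -> 1024 after default_setup
#   get_meminfo_value HugePages_Surp 0     -> surplus 2 MiB pages on node 0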
00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.706 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.707 17:11:40 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.707 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496776 kB' 'MemUsed: 5745204 kB' 'SwapCached: 0 kB' 'Active: 462408 kB' 'Inactive: 2842452 kB' 'Active(anon): 130368 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 3184976 kB' 'Mapped: 48772 kB' 'AnonPages: 121472 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84188 kB' 'Slab: 164472 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.708 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.709 node0=1024 expecting 1024 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:29.709 ************************************ 00:05:29.709 END TEST default_setup 00:05:29.709 ************************************ 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:29.709 00:05:29.709 real 0m1.446s 00:05:29.709 user 0m0.641s 00:05:29.709 sys 0m0.742s 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.709 17:11:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:29.709 17:11:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:29.709 17:11:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:29.709 17:11:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.709 17:11:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.709 17:11:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:29.709 ************************************ 00:05:29.709 START TEST per_node_1G_alloc 00:05:29.709 ************************************ 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.709 17:11:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.280 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.280 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.280 17:11:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.280 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548200 kB' 'MemAvailable: 10527864 kB' 'Buffers: 2436 kB' 'Cached: 3182548 kB' 'SwapCached: 0 kB' 'Active: 463020 kB' 'Inactive: 2842468 kB' 'Active(anon): 130980 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122072 kB' 'Mapped: 49044 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164476 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80288 kB' 'KernelStack: 6628 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
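A few entries back in this same stretch (hugepages.sh@145-146, just before scripts/setup.sh prints the "Active devices" / "Already using the uio_pci_generic driver" lines), per_node_1G_alloc asked for 1048576 kB on node 0, which the harness turns into NRHUGE=512 because the default hugepage size on this VM is 2048 kB. The sketch below shows that arithmetic and the generic per-node sysfs knob such a request ultimately drives; this is the standard kernel interface, not necessarily the exact command sequence scripts/setup.sh runs.

size_kb=1048576                                   # 1 GiB requested for node 0
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
nr_pages=$(( size_kb / hugepagesize_kb ))         # 1048576 / 2048 = 512 -> NRHUGE=512
# Per-node allocation goes through the node's own sysfs counter (HUGENODE=0):
echo "$nr_pages" | sudo tee \
    /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages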
00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.281 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548200 kB' 'MemAvailable: 10527860 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462536 kB' 'Inactive: 2842464 kB' 'Active(anon): 130496 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121600 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164592 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80404 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
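The block above is setup/common.sh's get_meminfo helper at work: it opens /proc/meminfo (or a per-node meminfo file when a NUMA node is given), splits each line on ': ', and walks field by field until it reaches the requested key (AnonHugePages in the first scan, HugePages_Surp here), then echoes that field's value and returns. A minimal sketch of the same lookup, with an illustrative function name rather than the exact helper (the real one also strips the leading "Node <id> " prefix from per-node files):

    get_meminfo_value() {
        local key=$1 var val
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key is reached, then print
            # its numeric value (a trailing "kB" unit, if present, lands in $_).
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # On this runner the calls traced above resolve to:
    #   get_meminfo_value AnonHugePages   -> 0
    #   get_meminfo_value HugePages_Surp  -> 0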
00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.282 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 
17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.283 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548564 kB' 'MemAvailable: 10528224 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462540 kB' 'Inactive: 2842464 kB' 'Active(anon): 130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121608 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164592 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80404 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.284 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.285 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
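Each of these scans fills in one term of the hugepage bookkeeping that hugepages.sh checks once the values are collected: anon from AnonHugePages, surp from HugePages_Surp, resv from HugePages_Rsvd, and, in the scan that follows, the kernel's HugePages_Total. A rough sketch of that accounting under the same assumptions as the helper above (variable names are illustrative, not the script's own; 512 is the per-node allocation this test configured):

    configured=512                                 # hugepages requested by the test
    anon=$(get_meminfo_value AnonHugePages)        # 0 on this runner
    surp=$(get_meminfo_value HugePages_Surp)       # 0
    resv=$(get_meminfo_value HugePages_Rsvd)       # 0
    total=$(get_meminfo_value HugePages_Total)     # 512

    # Healthy allocation in this sketch: the configured count accounts for every
    # reported page, and no surplus or reserved pages are outstanding.
    (( configured == total + surp + resv )) || echo 'unexpected hugepage accounting'
    (( configured == total ))               || echo 'surplus/reserved pages present'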
00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:30.286 nr_hugepages=512 00:05:30.286 resv_hugepages=0 00:05:30.286 surplus_hugepages=0 00:05:30.286 anon_hugepages=0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.286 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548564 kB' 
'MemAvailable: 10528224 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462424 kB' 'Inactive: 2842464 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121488 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164592 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80404 kB' 'KernelStack: 6560 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.548 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 
17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548564 kB' 'MemUsed: 4693416 kB' 'SwapCached: 0 kB' 'Active: 462496 kB' 'Inactive: 2842464 kB' 'Active(anon): 
130456 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 3184980 kB' 'Mapped: 48776 kB' 'AnonPages: 121612 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84188 kB' 'Slab: 164592 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 
17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 
17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.549 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:30.550 node0=512 expecting 512 00:05:30.550 ************************************ 00:05:30.550 END TEST per_node_1G_alloc 00:05:30.550 ************************************ 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:30.550 00:05:30.550 real 0m0.717s 00:05:30.550 user 0m0.332s 00:05:30.550 sys 0m0.398s 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.550 17:11:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:30.550 17:11:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:30.550 17:11:41 
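The per_node_1G_alloc test that just finished boils down to the bookkeeping walked through in the trace above: check that the global HugePages_Total equals the requested, surplus, and reserved counts, then visit each NUMA node and compare its HugePages_Total against the expected per-node share (a single node here, hence "node0=512 expecting 512"). The following is a condensed sketch reconstructed from the setup/hugepages.sh@110-130 lines visible in the xtrace; the node_meminfo helper and the hard-coded nodes_test[0]=512 / resv=0 values are stand-ins for illustration, not the real SPDK helpers.

#!/usr/bin/env bash
# Sketch of the per-node check traced above (setup/hugepages.sh@110-130).
# Names follow the xtrace; bodies are paraphrased for illustration.
shopt -s extglob

declare -a nodes_sys nodes_test sorted_t sorted_s
nodes_test[0]=512   # expected pages on node0 for this test (stand-in value)
resv=0              # reserved pages; 0 in this run

# Read one field from a node's meminfo; the trace does this via get_meminfo.
node_meminfo() {
    awk -v f="$1:" '$3 == f {print $4}' "/sys/devices/system/node/node$2/meminfo"
}

# Record what the kernel actually allocated on every node.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(node_meminfo HugePages_Total "${node##*node}")
done

# Fold reserved and surplus pages into the expectation, then compare the two
# value sets (indexed arrays keep numeric keys sorted, which is what the
# sorted_t/sorted_s trick in the trace relies on).
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + $(node_meminfo HugePages_Surp "$node") ))
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "per-node allocation matches"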
setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:30.550 17:11:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.550 17:11:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.550 17:11:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.550 ************************************ 00:05:30.550 START TEST even_2G_alloc 00:05:30.550 ************************************ 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.550 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.068 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.068 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.068 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.068 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic 
driver 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:31.068 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496080 kB' 'MemAvailable: 9475740 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462948 kB' 'Inactive: 2842464 kB' 'Active(anon): 130908 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121968 kB' 'Mapped: 48844 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164504 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80316 kB' 'KernelStack: 6608 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 
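For even_2G_alloc the script exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before re-running scripts/setup.sh, and the meminfo dump above confirms the kernel now holds 1024 default-size (2048 kB) pages. verify_nr_hugepages then opens with the transparent-hugepage guard seen at setup/hugepages.sh@96-97: as far as the trace shows, AnonHugePages is only sampled when THP is not set to [never], since THP-backed anonymous memory could otherwise skew the accounting. A minimal stand-alone version of that guard, with a plain awk lookup standing in for the get_meminfo helper, might look like this:

# Sketch of the THP guard at the top of verify_nr_hugepages
# (setup/hugepages.sh@96-97 in the trace); awk stands in for get_meminfo.
anon=0
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
    # THP is enabled in some form, so note how much anonymous memory is
    # currently backed by transparent huge pages (0 kB in this run).
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "anon=$anon"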
17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.069 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496600 kB' 'MemAvailable: 9476260 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462524 kB' 'Inactive: 2842464 kB' 'Active(anon): 130484 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121636 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164488 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80300 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.070 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.071 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496672 kB' 'MemAvailable: 9476332 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462500 kB' 'Inactive: 2842464 kB' 'Active(anon): 130460 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121600 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164488 kB' 'SReclaimable: 84188 kB' 
'SUnreclaim: 80300 kB' 'KernelStack: 6560 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.072 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.073 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:31.074 nr_hugepages=1024 00:05:31.074 resv_hugepages=0 00:05:31.074 surplus_hugepages=0 00:05:31.074 anon_hugepages=0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496672 kB' 'MemAvailable: 9476332 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462528 kB' 'Inactive: 2842464 kB' 'Active(anon): 130488 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164476 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80288 kB' 'KernelStack: 6560 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.074 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.075 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496672 kB' 'MemUsed: 5745308 kB' 'SwapCached: 0 kB' 'Active: 462564 kB' 'Inactive: 2842464 kB' 'Active(anon): 130524 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3184980 kB' 'Mapped: 48776 kB' 'AnonPages: 121632 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84188 kB' 'Slab: 164476 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.334 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 
17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:31.335 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 
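The scan that just returned is the same get_meminfo pattern repeated above for every meminfo key: setup/common.sh picks the per-node meminfo file when a node argument is given, strips the "Node N " prefix, then reads key/value pairs with IFS=': ' until the requested key matches and its value is echoed. Below is a condensed sketch of that loop, assuming extglob; the helper name comes from the trace, but the body is a reconstruction for illustration, not the script verbatim.

    # Condensed reconstruction of the get_meminfo loop traced above; same
    # parsing technique, not the script itself.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total      # prints 1024 on the node traced above
    get_meminfo HugePages_Surp 0     # prints 0, matching the return just logged

The two calls at the end mirror the values the trace just echoed (1024 total pages, 0 surplus on node0); on a different machine they would print whatever that host's meminfo reports.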
00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:31.336 node0=1024 expecting 1024 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:31.336 00:05:31.336 real 0m0.714s 00:05:31.336 user 0m0.328s 00:05:31.336 sys 0m0.402s 00:05:31.336 ************************************ 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.336 17:11:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:31.336 END TEST even_2G_alloc 00:05:31.336 ************************************ 00:05:31.336 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:31.336 17:11:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:31.336 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.336 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.336 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:31.336 ************************************ 00:05:31.336 START TEST odd_alloc 00:05:31.336 ************************************ 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 0 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.336 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.856 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.856 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.857 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.857 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6504824 kB' 'MemAvailable: 9484484 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 463112 kB' 'Inactive: 2842464 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122484 kB' 'Mapped: 49060 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164448 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 6532 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 
17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.857 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 
17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.858 17:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6504588 kB' 'MemAvailable: 9484248 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462672 kB' 'Inactive: 2842464 kB' 'Active(anon): 130632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122112 kB' 'Mapped: 48888 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164476 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80288 kB' 'KernelStack: 6548 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.858 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.859 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 
17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6504588 kB' 'MemAvailable: 9484248 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462128 kB' 'Inactive: 2842464 kB' 'Active(anon): 130088 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121492 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164484 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80296 kB' 'KernelStack: 6528 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.860 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.861 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 
17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:31.862 nr_hugepages=1025 00:05:31.862 resv_hugepages=0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.862 surplus_hugepages=0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.862 anon_hugepages=0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 
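[editor's note] The trace above is setup/common.sh's get_meminfo helper walking every key in the meminfo output until it hits the one requested: first HugePages_Surp (surp=0), then HugePages_Rsvd (resv=0), before the script echoes nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. A minimal sketch of that pattern follows; the helper name get_meminfo_sketch is invented here and the error handling of the real script is omitted, but the file selection (/proc/meminfo vs. the per-node sysfs meminfo), the "Node <n>" prefix stripping and the IFS=': ' read loop mirror what the log shows.

#!/usr/bin/env bash
# get_meminfo_sketch KEY [NODE]
# Simplified re-creation (an assumption, not the actual setup/common.sh) of the
# meminfo lookup traced above: pick the system-wide or per-node meminfo file,
# drop the "Node <n> " prefix that per-node files carry, then scan key/value
# pairs until KEY is found and print its numeric value.
shopt -s extglob

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node statistics live under sysfs, as the trace shows for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ..."; strip that prefix the
    # same way the traced script does with its extglob expansion.
    mem=("${mem[@]#Node +([0-9]) }")

    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# On the host in this log the lookups resolve to 1025 total hugepages and
# zero surplus pages on node 0.
get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Surp 0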
00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6504588 kB' 'MemAvailable: 9484248 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462428 kB' 'Inactive: 2842464 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121556 kB' 'Mapped: 48772 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164472 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80284 kB' 'KernelStack: 6560 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.862 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.863 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.864 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6504588 kB' 'MemUsed: 5737392 kB' 'SwapCached: 0 kB' 'Active: 462420 kB' 'Inactive: 2842464 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 3184980 kB' 'Mapped: 48772 kB' 'AnonPages: 121544 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84188 kB' 'Slab: 164472 kB' 
'SReclaimable: 84188 kB' 'SUnreclaim: 80284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
(setup/common.sh@31-32 walks this node0 snapshot key by key, from MemTotal up to and including HugePages_Total, issuing continue for each; none of those keys matches HugePages_Surp.)
00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:32.123 node0=1025 expecting 1025 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:32.123 00:05:32.123 real 0m0.708s 00:05:32.123 user 0m0.335s 00:05:32.123 sys 0m0.420s 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.123 17:11:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.123 ************************************ 00:05:32.123 END TEST odd_alloc 00:05:32.123 ************************************ 00:05:32.123 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:32.123 17:11:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:32.123 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.123 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.123 17:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.123 ************************************ 00:05:32.123 START TEST custom_alloc 00:05:32.123 ************************************ 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:32.123 
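For reference, get_test_nr_hugepages arrives at nr_hugepages=512 above by dividing the requested size by the system default hugepage size: 1048576 kB requested / 2048 kB per page = 512 pages. A minimal sketch of that arithmetic only (not the SPDK helper itself, which also handles extra arguments and per-node requests):

  # Sketch: how a size in kB maps to a count of default-sized hugepages.
  default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # kB, typically 2048
  size=1048576                                                           # requested kB (1 GiB)
  if (( size >= default_hugepages )); then
      nr_hugepages=$(( size / default_hugepages ))                       # 1048576 / 2048 = 512
  fi
  echo "nr_hugepages=$nr_hugepages"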
17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:32.123 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:32.124 17:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.124 17:11:42 
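At this point the test has collapsed its request into HUGENODE='nodes_hp[0]=512' and hands it to scripts/setup.sh (next trace line). What setup.sh does with it is outside this excerpt; the generic kernel interface for a per-node reservation of that shape is the per-node sysfs knob, roughly:

  # Sketch (generic sysfs path, not a copy of scripts/setup.sh): reserve
  # 512 x 2048 kB hugepages on NUMA node 0, then read back what was granted.
  node=0 pages=512
  echo "$pages" | sudo tee "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
  cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"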
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.644 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.644 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.644 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.644 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.644 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549804 kB' 'MemAvailable: 10529464 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462920 kB' 'Inactive: 2842464 kB' 'Active(anon): 130880 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122020 kB' 'Mapped: 49152 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164508 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80320 kB' 'KernelStack: 6580 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB'
(setup/common.sh@31-32 walks this snapshot key by key, from MemTotal up to HardwareCorrupted, issuing continue for each; none of those keys matches AnonHugePages.)
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc --
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549804 kB' 'MemAvailable: 10529464 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462312 kB' 'Inactive: 2842464 kB' 'Active(anon): 130272 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121380 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164532 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80344 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.645 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.645 
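The field-by-field walk that follows is setup/common.sh's get_meminfo looking up HugePages_Surp in the snapshot it just printed. A simplified stand-in for that lookup (same inputs and result, but using sed/awk instead of the read/continue loop shown in the trace):

  # Simplified stand-in: print one meminfo field, optionally for a single NUMA node.
  get_meminfo_value() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix each line with "Node <n> "; strip it, then match the key.
      sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$key:" '$1 == k {print $2; exit}'
  }
  get_meminfo_value HugePages_Surp      # -> 0 in the run above
  get_meminfo_value HugePages_Free 0    # per-node variant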
17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
(setup/common.sh@31-32 keeps walking the snapshot above, from Cached through CmaFree, issuing continue for each; none of those keys matches HugePages_Surp.)
00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549804 kB' 'MemAvailable: 10529464 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462560 kB' 'Inactive: 2842464 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121624 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84188 kB' 'Slab: 164532 kB' 'SReclaimable: 84188 kB' 'SUnreclaim: 80344 kB' 'KernelStack: 6576 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.646 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:32.647 nr_hugepages=512 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:32.647 resv_hugepages=0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.647 surplus_hugepages=0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.647 anon_hugepages=0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549804 kB' 'MemAvailable: 10529472 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 462476 kB' 'Inactive: 2842464 kB' 'Active(anon): 130436 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121540 kB' 'Mapped: 48776 kB' 'Shmem: 10476 kB' 'KReclaimable: 84204 kB' 'Slab: 164548 kB' 'SReclaimable: 84204 kB' 'SUnreclaim: 80344 kB' 'KernelStack: 6560 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
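The trace above and below is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node index is given) with mapfile, strips any "Node <N> " prefix, then walks every "Key: value" pair with IFS=': ' read -r var val _ until the requested key (HugePages_Rsvd just finished, HugePages_Total in progress here) matches and its value is echoed; every non-matching key therefore shows up as a "continue" entry. A minimal stand-alone sketch of that pattern, under the assumption stated in the comments (the function name here is illustrative; the real helper is get_meminfo in setup/common.sh and uses the mapfile-based variant traced above):

get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # key to look up, optional NUMA node index
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}           # per-node meminfo prefixes each line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                 # e.g. 512 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        fi
    done < "$mem_f"
    echo 0                                   # key absent: report 0, matching the trace's 'echo 0'
}

Invoked as, for example, get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0 for node 0, mirroring the get_meminfo calls traced in this log.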
00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.647 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549804 kB' 'MemUsed: 4692176 kB' 'SwapCached: 0 kB' 'Active: 462228 kB' 'Inactive: 2842464 kB' 'Active(anon): 130188 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 3184980 kB' 'Mapped: 48776 kB' 'AnonPages: 121604 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84204 kB' 'Slab: 164548 kB' 'SReclaimable: 84204 kB' 'SUnreclaim: 80344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.648 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- 
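
The scan that just finished (setup/common.sh@17 through @33) is get_meminfo reading node 0's meminfo: it switches mem_f to the per-node sysfs file, strips the leading "Node 0 " prefix from every line, then walks key/value pairs until the requested key (HugePages_Surp here) matches and echoes its value. A minimal sketch of that loop follows; it is an approximation of the traced logic, not the verbatim setup/common.sh, and it assumes extglob is enabled as the +([0-9]) pattern in the trace implies.

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node stats live in sysfs; fall back to /proc/meminfo for system-wide reads.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as in the trace, get_meminfo HugePages_Surp 0 would print 0 on this machine, matching the 'HugePages_Surp: 0' line in node 0's meminfo dump above.
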
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.649 node0=512 expecting 512 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:32.649 00:05:32.649 real 0m0.668s 00:05:32.649 user 0m0.335s 00:05:32.649 sys 0m0.379s 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.649 ************************************ 00:05:32.649 17:11:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.649 END TEST custom_alloc 00:05:32.649 ************************************ 00:05:32.649 17:11:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:32.649 17:11:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:32.649 17:11:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.649 17:11:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.649 17:11:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.649 ************************************ 00:05:32.649 START TEST no_shrink_alloc 00:05:32.649 ************************************ 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:32.649 17:11:43 
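
The no_shrink_alloc test that starts here asks get_test_nr_hugepages for 2097152 on node 0, and the trace resolves that to nr_hugepages=1024 and nodes_test[0]=1024. That is consistent with treating the size as kB and dividing by the 2048 kB huge page size reported elsewhere in this log. A rough sketch of that arithmetic is below; it is simplified from the traced flow (hugepages.sh@49-@73) and reuses the get_meminfo sketch above, so it is an illustration rather than the script's exact code.

    # Sketch of the request traced at hugepages.sh@195 ("get_test_nr_hugepages 2097152 0").
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")                               # e.g. ("0")
        local default_hugepages
        default_hugepages=$(get_meminfo Hugepagesize)       # 2048 on this box
        local nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
        local -a nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            # With a single node the whole count lands on it, matching nodes_test[0]=1024;
            # how the real script splits counts across several nodes is not visible here.
            nodes_test[node]=$nr_hugepages
        done
        declare -p nodes_test
    }

Invoked as in the log, this sketch prints declare -a nodes_test=([0]="1024"), which is the per-node target the rest of the test verifies against.
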
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.649 17:11:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.236 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.236 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.236 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.236 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.236 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6497608 kB' 'MemAvailable: 9477272 kB' 'Buffers: 2436 kB' 'Cached: 3182548 kB' 'SwapCached: 0 kB' 'Active: 459672 kB' 'Inactive: 2842468 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119076 kB' 'Mapped: 48544 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164416 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 6564 
kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.237 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.238 17:11:44 
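
At this point verify_nr_hugepages has read AnonHugePages system-wide (node is left empty, so the lookup falls back to /proc/meminfo) and the result is recorded as anon=0 in the next entry; it only bothers because the THP mode string checked at hugepages.sh@96, "always [madvise] never", does not have [never] selected. The stand-in below compresses that accounting step; the function name verify_counts and the direct sysfs read are illustrative assumptions, not the script's own code.

    verify_counts() {
        local anon=0 surp resv
        # "[never]" absent from the THP control file means anon huge pages can exist.
        if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)    # 0 kB in this run, recorded as anon=0
        fi
        surp=$(get_meminfo HugePages_Surp)       # also 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)       # the read that starts just below
        echo "anon=$anon surp=$surp resv=$resv"
    }
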
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6497608 kB' 'MemAvailable: 9477268 kB' 'Buffers: 2436 kB' 'Cached: 3182544 kB' 'SwapCached: 0 kB' 'Active: 459624 kB' 'Inactive: 2842464 kB' 'Active(anon): 127584 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118784 kB' 'Mapped: 48368 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164408 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80216 kB' 'KernelStack: 6520 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.238 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.239 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:33.240 
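The pass that just finished is setup/common.sh's get_meminfo helper resolving HugePages_Surp: the meminfo snapshot is walked line by line with IFS=': ' and read -r var val _, every non-matching key falls through to continue, and the matching key's value is echoed back (here 0, which hugepages.sh stores as surp=0 at line 99). A minimal standalone sketch of the same lookup, reading /proc/meminfo directly; the function name lookup_meminfo is illustrative and is not the SPDK helper itself:

    # Sketch: resolve one key from /proc/meminfo the way the traced loop does (Linux, bash).
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # skip every other key
            echo "$val"                         # the "kB" unit, when present, lands in the discarded third field
            return 0
        done < /proc/meminfo
        return 1                                # key not found
    }
    surp=$(lookup_meminfo HugePages_Surp)       # 0 on this runner, per the trace above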
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6497860 kB' 'MemAvailable: 9477528 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459540 kB' 'Inactive: 2842472 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 48428 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164408 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80216 kB' 'KernelStack: 6520 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 
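The long single-quoted block above is the full meminfo snapshot that common.sh captured with mapfile -t mem and is now replaying through printf '%s\n' for the next scan (this time looking for HugePages_Rsvd). The mem=("${mem[@]#Node +([0-9]) }") step strips a leading "Node <id> " from each element, which only matters when the snapshot comes from a per-node file; for the system-wide /proc/meminfo it is a no-op. A compact sketch of that capture, normalize, replay pattern, assuming bash with extglob available:

    # Sketch: snapshot the meminfo source once, normalize it, then replay it for the key scan.
    shopt -s extglob                                  # the +([0-9]) pattern below requires extglob
    mapfile -t mem < /proc/meminfo                    # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")                  # drop a "Node <id> " prefix when present
    printf '%s\n' "${mem[@]}" | head -n 5             # replay the snapshot (just a peek here)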
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.240 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.241 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.241 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.500 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.500 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.500 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.500 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.500 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.501 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:33.502 nr_hugepages=1024 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:33.502 resv_hugepages=0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:33.502 surplus_hugepages=0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:33.502 anon_hugepages=0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6497860 kB' 'MemAvailable: 9477528 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459416 kB' 'Inactive: 2842472 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118548 kB' 'Mapped: 48236 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164396 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80204 kB' 'KernelStack: 6504 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 
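A few trace lines back, hugepages.sh printed the values it derived (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checked (( 1024 == nr_hugepages + surp + resv )): the pool the test configured must be fully accounted for by the kernel's counters. The scan running here repeats the lookup for HugePages_Total so the same identity can be re-checked against what the kernel actually reports. A hedged sketch of that consistency check, reusing the illustrative lookup_meminfo helper sketched above:

    # Sketch: confirm the configured hugepage pool is fully accounted for (illustrative names).
    nr_hugepages=1024                               # what the test asked the kernel to allocate
    surp=$(lookup_meminfo HugePages_Surp)           # surplus pages, expected 0 here
    resv=$(lookup_meminfo HugePages_Rsvd)           # reserved pages, expected 0 here
    total=$(lookup_meminfo HugePages_Total)         # what the kernel reports
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2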
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.502 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:33.503 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6497860 kB' 'MemUsed: 5744120 kB' 'SwapCached: 0 kB' 'Active: 459484 kB' 'Inactive: 2842472 kB' 'Active(anon): 127444 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3184988 kB' 'Mapped: 48236 kB' 'AnonPages: 118636 kB' 'Shmem: 10476 kB' 'KernelStack: 6520 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84192 kB' 'Slab: 164396 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 
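This final pass is the per-node variant: get_nodes found a single NUMA node (no_nodes=1), and get_meminfo is now called as get_meminfo HugePages_Surp 0, so common.sh switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix that every line in that file carries before running the same key scan. A standalone sketch of the per-node lookup; it uses sed for the prefix strip instead of the array expansion the script uses, and the function name is again illustrative:

    # Sketch: the same key lookup, but against a per-node meminfo file (Linux, bash).
    lookup_node_meminfo() {
        local node=$1 get=$2 var val _
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        # Per-node lines look like "Node 0 HugePages_Total:    1024",
        # so drop the "Node <id>" prefix before splitting key and value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    lookup_node_meminfo 0 HugePages_Surp            # expected to print 0 on this runner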
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.504 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:33.505 node0=1024 expecting 1024 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:33.505 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.505 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.025 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.025 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.025 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.025 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.025 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496380 kB' 'MemAvailable: 9476048 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459804 kB' 'Inactive: 2842472 kB' 'Active(anon): 127764 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118872 kB' 'Mapped: 48220 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164392 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80200 kB' 'KernelStack: 6492 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.025 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.026 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496380 kB' 'MemAvailable: 9476048 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459568 kB' 'Inactive: 2842472 kB' 'Active(anon): 127528 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118720 kB' 'Mapped: 48236 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164352 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80160 kB' 'KernelStack: 6488 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.027 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.028 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
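[Annotation] The trace above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key (here for HugePages_Surp and AnonHugePages) until it hits the requested field, then echoing its value. The following is a minimal stand-alone sketch of that loop, not the exact SPDK helper: the function name get_meminfo_value and its argument handling are illustrative assumptions, and it assumes the usual "Key: value kB" meminfo layout with an optional "Node <n> " prefix in per-node files.

#!/usr/bin/env bash
# get_meminfo_value KEY [NODE]
# Print the numeric value of KEY from /proc/meminfo, or from
# /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <n> "; strip that prefix
    # so both file layouts parse the same way.
    mem=("${mem[@]#Node * }")
    local entry var val _
    for entry in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$entry"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example matching the trace above: get_meminfo_value HugePages_Surp 0  ->  0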
00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496640 kB' 'MemAvailable: 9476308 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459580 kB' 'Inactive: 2842472 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118716 kB' 'Mapped: 48236 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164352 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80160 kB' 'KernelStack: 6488 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.029 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.030 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:34.031 nr_hugepages=1024 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:34.031 resv_hugepages=0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.031 surplus_hugepages=0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.031 anon_hugepages=0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 
== nr_hugepages + surp + resv )) 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496640 kB' 'MemAvailable: 9476308 kB' 'Buffers: 2436 kB' 'Cached: 3182552 kB' 'SwapCached: 0 kB' 'Active: 459560 kB' 'Inactive: 2842472 kB' 'Active(anon): 127520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118716 kB' 'Mapped: 48236 kB' 'Shmem: 10476 kB' 'KReclaimable: 84192 kB' 'Slab: 164352 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80160 kB' 'KernelStack: 6488 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 5050368 kB' 'DirectMap1G: 9437184 kB' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.031 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.032 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6496640 kB' 'MemUsed: 5745340 kB' 'SwapCached: 0 kB' 'Active: 459544 kB' 'Inactive: 2842472 kB' 'Active(anon): 127504 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2842472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 3184988 kB' 'Mapped: 48236 kB' 'AnonPages: 118640 kB' 'Shmem: 10476 kB' 'KernelStack: 6488 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84192 kB' 'Slab: 164352 kB' 'SReclaimable: 84192 kB' 'SUnreclaim: 80160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.033 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.034 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.035 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:34.035 node0=1024 expecting 1024 00:05:34.035 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:34.035 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:34.035 00:05:34.035 real 0m1.345s 00:05:34.035 user 0m0.651s 00:05:34.035 sys 0m0.784s 00:05:34.035 17:11:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.035 17:11:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:34.035 ************************************ 00:05:34.035 END TEST no_shrink_alloc 00:05:34.035 ************************************ 00:05:34.293 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:34.293 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:34.293 ************************************ 00:05:34.293 END TEST hugepages 00:05:34.293 ************************************ 00:05:34.293 00:05:34.293 real 0m6.015s 00:05:34.293 user 0m2.779s 00:05:34.293 sys 0m3.371s 00:05:34.293 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.293 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:34.293 17:11:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:34.293 17:11:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:34.293 17:11:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.293 17:11:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.293 17:11:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:34.293 ************************************ 00:05:34.293 START TEST driver 00:05:34.293 ************************************ 00:05:34.293 17:11:44 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:34.293 * Looking for test storage... 
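Note on the trace above: the long run of '[[ <field> == HugePages_Surp ]] / continue' entries is setup/common.sh scanning a meminfo listing one "Field: value" pair at a time until it reaches HugePages_Surp for the node under test, and clear_hp then zeroes every per-node hugepage reservation before the next test group starts. A minimal sketch of both patterns, assuming /proc/meminfo as the input and nr_hugepages as the sysfs file behind the bare 'echo 0' entries:

# scan a meminfo listing field by field, as the loop in the trace does
while IFS=': ' read -r var val _; do
    [[ $var == HugePages_Surp ]] || continue
    echo "$val"
    break
done < /proc/meminfo

# clear_hp: reset hugepage reservations on every node once the tests are done
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done
export CLEAR_HUGE=yes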
00:05:34.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:34.293 17:11:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:34.293 17:11:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.293 17:11:45 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.863 17:11:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:40.863 17:11:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.863 17:11:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.863 17:11:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:40.863 ************************************ 00:05:40.863 START TEST guess_driver 00:05:40.863 ************************************ 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:40.863 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:40.863 Looking for driver=uio_pci_generic 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
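Note on the guess_driver trace above: it first tries vfio (which needs populated /sys/kernel/iommu_groups or unsafe no-IOMMU mode) and, finding neither, falls back to uio_pci_generic by asking modprobe whether the module resolves to a real .ko. A sketch of that decision using only the checks visible in the trace; the vfio-pci name for the vfio branch is assumed, since that branch is not taken in this run:

shopt -s nullglob   # so an empty iommu_groups directory counts as zero groups
iommu_groups=(/sys/kernel/iommu_groups/*)
unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
    driver=vfio-pci
elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
    driver=uio_pci_generic
else
    driver='No valid driver found'
fi
echo "Looking for driver=$driver"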
00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.863 17:11:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.863 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:40.863 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:40.863 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.429 17:11:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.982 00:05:47.982 real 0m7.108s 00:05:47.982 user 0m0.772s 00:05:47.982 sys 0m1.401s 00:05:47.982 17:11:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.982 17:11:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:47.982 ************************************ 00:05:47.982 END TEST guess_driver 00:05:47.982 ************************************ 00:05:47.982 17:11:58 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:47.982 00:05:47.982 real 0m13.168s 00:05:47.982 user 0m1.128s 00:05:47.982 sys 0m2.212s 00:05:47.982 17:11:58 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.982 ************************************ 00:05:47.982 END TEST driver 00:05:47.982 ************************************ 00:05:47.982 17:11:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:47.982 17:11:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:47.983 17:11:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:47.983 17:11:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.983 17:11:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 
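Note on the read loop in the trace above (driver.sh@57-64): after settling on uio_pci_generic, the test replays 'setup.sh config' output, skips the 'devices:' banner, and fails if any bound device reports a driver other than the one guessed. Roughly, assuming setup.sh config prints one '... -> <driver>' line per device as the markers above suggest:

fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue            # skip the leading "devices:" banner
    [[ $setup_driver == "$driver" ]] || fail=1   # every bound device must match the guess
done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
(( fail == 0 )) && echo 'guess_driver passed'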
00:05:47.983 17:11:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.983 ************************************ 00:05:47.983 START TEST devices 00:05:47.983 ************************************ 00:05:47.983 17:11:58 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:47.983 * Looking for test storage... 00:05:47.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:47.983 17:11:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:47.983 17:11:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:47.983 17:11:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.983 17:11:58 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:48.548 17:11:59 setup.sh.devices -- 
common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:48.548 17:11:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:48.548 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:48.548 No valid GPT data, bailing 00:05:48.548 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:48.548 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:48.548 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:48.548 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:48.548 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:48.807 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:48.807 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:48.807 No valid GPT data, bailing 00:05:48.807 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:48.807 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:48.807 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:48.807 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:48.807 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:48.807 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:48.807 17:11:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:48.808 No valid GPT data, bailing 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme2n2 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:48.808 No valid GPT data, bailing 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:48.808 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:48.808 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:48.808 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:49.066 No valid GPT data, bailing 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:49.066 
17:11:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:49.066 No valid GPT data, bailing 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:49.066 17:11:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:49.066 17:11:59 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:49.066 17:11:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:49.066 17:11:59 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.066 17:11:59 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.066 17:11:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:49.066 ************************************ 00:05:49.066 START TEST nvme_mount 00:05:49.066 ************************************ 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:49.066 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:50.028 Creating new GPT entries in memory. 00:05:50.028 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:50.028 other utilities. 00:05:50.028 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:50.028 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.028 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.028 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.028 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:51.401 Creating new GPT entries in memory. 00:05:51.401 The operation has completed successfully. 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 73112 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.401 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.401 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.659 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.916 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.916 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:52.174 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:52.174 17:12:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:52.432 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:52.432 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:52.432 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:52.432 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.432 17:12:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.692 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.951 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.209 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.209 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.467 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.725 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.983 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.983 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.983 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.983 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.240 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.240 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.499 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.499 ************************************ 00:05:54.499 END TEST nvme_mount 00:05:54.499 ************************************ 00:05:54.499 00:05:54.499 real 0m5.374s 00:05:54.499 user 0m1.452s 00:05:54.499 sys 0m1.594s 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.499 17:12:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:54.499 17:12:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:54.499 17:12:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:54.499 17:12:05 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.499 17:12:05 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.499 17:12:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.499 ************************************ 00:05:54.499 START TEST dm_mount 00:05:54.499 ************************************ 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- 
# pv0=nvme0n1p1 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:54.499 17:12:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:55.434 Creating new GPT entries in memory. 00:05:55.434 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:55.434 other utilities. 00:05:55.434 17:12:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:55.434 17:12:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.434 17:12:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:55.434 17:12:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:55.434 17:12:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:56.806 Creating new GPT entries in memory. 00:05:56.806 The operation has completed successfully. 00:05:56.806 17:12:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:56.806 17:12:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.806 17:12:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:56.806 17:12:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:56.806 17:12:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:57.740 The operation has completed successfully. 
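Note on the two 'The operation has completed successfully.' entries above: partition_drive wipes nvme0n1's label and creates two equal 262,144-sector partitions for the dm test, with sync_dev_uevents.sh parked in the background so the script only proceeds once the kernel has announced each new partition (the 'wait <pid>' entries). A sketch of that sequence as the trace shows it; the sync_pid variable name is illustrative:

disk=nvme0n1
sgdisk "/dev/$disk" --zap-all
# watch for the partition uevents in the background while sgdisk does the work
/home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition "${disk}p1" "${disk}p2" &
sync_pid=$!
flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:264191
flock "/dev/$disk" sgdisk "/dev/$disk" --new=2:264192:526335
wait "$sync_pid"   # both partitions are now visible under /dev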
00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 73737 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.740 17:12:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:57.998 17:12:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.563 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:58.821 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.078 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.335 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.335 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.631 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
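Annotation: the dm_mount trace above reduces to a short create/verify/cleanup sequence. A minimal sketch of that flow, using only the commands echoed in the trace (the dmsetup table that devices.sh feeds on stdin is not shown in the log, so it stays elided here):

  dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
  dmsetup create nvme_dm_test                 # table arrives on stdin in devices.sh; elided, as in the trace
  mkdir -p "$dm_mount"
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test "$dm_mount"
  # teardown, mirroring cleanup_dm in the trace
  mountpoint -q "$dm_mount" && umount "$dm_mount"
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2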
00:05:59.632 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:59.632 00:05:59.632 real 0m5.117s 00:05:59.632 user 0m0.958s 00:05:59.632 sys 0m1.088s 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.632 ************************************ 00:05:59.632 END TEST dm_mount 00:05:59.632 ************************************ 00:05:59.632 17:12:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:59.632 17:12:10 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:59.632 17:12:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:59.890 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:59.890 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:59.890 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:59.890 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:59.890 17:12:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:59.891 17:12:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:59.891 00:05:59.891 real 0m12.535s 00:05:59.891 user 0m3.337s 00:05:59.891 sys 0m3.496s 00:05:59.891 17:12:10 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.891 17:12:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 ************************************ 00:05:59.891 END TEST devices 00:05:59.891 ************************************ 00:05:59.891 17:12:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:59.891 00:05:59.891 real 0m44.024s 00:05:59.891 user 0m10.489s 00:05:59.891 sys 0m13.157s 00:05:59.891 17:12:10 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.891 17:12:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:59.891 ************************************ 00:05:59.891 END TEST setup.sh 00:05:59.891 ************************************ 00:06:00.148 17:12:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.148 17:12:10 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:00.713 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.970 Hugepages 00:06:00.970 node hugesize free / total 00:06:00.970 node0 1048576kB 0 / 0 00:06:00.970 node0 2048kB 2048 / 2048 00:06:00.970 00:06:00.970 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:00.970 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:01.228 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:01.228 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:01.228 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:01.485 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:01.485 17:12:12 -- spdk/autotest.sh@130 -- # uname -s 00:06:01.485 17:12:12 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:01.485 17:12:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:01.485 17:12:12 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.675 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.675 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.675 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.675 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.675 17:12:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:03.606 17:12:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:03.606 17:12:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:03.606 17:12:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:03.606 17:12:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:03.606 17:12:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:03.606 17:12:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:03.606 17:12:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.606 17:12:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.606 17:12:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:03.606 17:12:14 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:03.606 17:12:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:03.606 17:12:14 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.129 Waiting for block devices as requested 00:06:04.129 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.388 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.388 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.388 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:09.671 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:09.671 17:12:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:09.671 17:12:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:09.671 17:12:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:09.671 17:12:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:09.671 17:12:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1557 -- # continue 00:06:09.671 17:12:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:09.671 17:12:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:09.671 17:12:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:09.671 17:12:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:09.671 17:12:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:09.671 17:12:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:09.671 17:12:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:09.672 17:12:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1557 -- # continue 00:06:09.672 17:12:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:09.672 17:12:20 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:09.672 17:12:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:09.672 17:12:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:09.672 17:12:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1557 -- # continue 00:06:09.672 17:12:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:09.672 17:12:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:09.672 17:12:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:09.672 17:12:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:09.672 17:12:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:09.672 17:12:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:09.672 17:12:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:09.672 17:12:20 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:09.672 17:12:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:09.672 17:12:20 -- common/autotest_common.sh@1557 -- # continue 00:06:09.672 17:12:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:09.672 17:12:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.672 17:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.672 17:12:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:09.672 17:12:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.672 17:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.672 17:12:20 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:10.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.799 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.799 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.799 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.799 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.799 17:12:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:10.799 17:12:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.799 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.799 17:12:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:10.799 17:12:21 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:10.799 17:12:21 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:10.799 17:12:21 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:10.799 17:12:21 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:10.799 17:12:21 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:11.055 17:12:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:11.055 17:12:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:11.055 17:12:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:11.055 17:12:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.055 17:12:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:11.055 17:12:21 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:11.055 17:12:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:11.055 17:12:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:11.055 17:12:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:11.055 17:12:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:11.056 17:12:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:11.056 17:12:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:11.056 17:12:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:11.056 17:12:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:11.056 17:12:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:11.056 17:12:21 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:11.056 17:12:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:11.056 17:12:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:11.056 17:12:21 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:11.056 17:12:21 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:11.056 17:12:21 -- common/autotest_common.sh@1593 -- # return 0 00:06:11.056 17:12:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:11.056 17:12:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:11.056 17:12:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.056 17:12:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.056 17:12:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:11.056 17:12:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.056 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 17:12:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:11.056 17:12:21 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:11.056 17:12:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.056 17:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.056 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 ************************************ 00:06:11.056 START TEST env 00:06:11.056 ************************************ 00:06:11.056 17:12:21 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:11.056 * Looking for test storage... 00:06:11.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:11.056 17:12:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:11.056 17:12:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.056 17:12:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.056 17:12:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.056 ************************************ 00:06:11.056 START TEST env_memory 00:06:11.056 ************************************ 00:06:11.056 17:12:21 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:11.056 00:06:11.056 00:06:11.056 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.056 http://cunit.sourceforge.net/ 00:06:11.056 00:06:11.056 00:06:11.056 Suite: memory 00:06:11.313 Test: alloc and free memory map ...[2024-07-15 17:12:21.938055] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:11.313 passed 00:06:11.313 Test: mem map translation ...[2024-07-15 17:12:22.000110] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:11.313 [2024-07-15 17:12:22.000403] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:11.313 [2024-07-15 17:12:22.000619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:11.313 [2024-07-15 17:12:22.000767] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:11.313 passed 00:06:11.313 Test: mem map registration ...[2024-07-15 17:12:22.102927] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:11.313 [2024-07-15 17:12:22.103024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:11.313 passed 00:06:11.570 Test: mem map adjacent registrations ...passed 00:06:11.570 00:06:11.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.570 suites 1 1 n/a 0 0 00:06:11.570 tests 4 4 4 0 0 00:06:11.570 asserts 152 152 152 0 n/a 00:06:11.570 00:06:11.570 Elapsed time = 0.327 seconds 00:06:11.570 00:06:11.570 real 0m0.372s 00:06:11.570 user 0m0.328s 00:06:11.570 sys 0m0.037s 00:06:11.570 17:12:22 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.570 17:12:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:11.570 ************************************ 00:06:11.570 END TEST env_memory 00:06:11.570 ************************************ 00:06:11.570 17:12:22 env -- common/autotest_common.sh@1142 -- # return 0 00:06:11.570 17:12:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:11.570 17:12:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.570 17:12:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.570 17:12:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.570 ************************************ 00:06:11.570 START TEST env_vtophys 00:06:11.570 ************************************ 00:06:11.570 17:12:22 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:11.570 EAL: lib.eal log level changed from notice to debug 00:06:11.570 EAL: Detected lcore 0 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 1 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 2 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 3 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 4 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 5 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 6 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 7 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 8 as core 0 on socket 0 00:06:11.570 EAL: Detected lcore 9 as core 0 on socket 0 00:06:11.570 EAL: Maximum logical cores by configuration: 128 00:06:11.570 EAL: Detected CPU lcores: 10 00:06:11.570 EAL: Detected NUMA nodes: 1 00:06:11.570 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:06:11.570 EAL: Detected shared linkage of DPDK 00:06:11.570 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:06:11.570 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:06:11.570 EAL: Registered [vdev] bus. 
00:06:11.570 EAL: bus.vdev log level changed from disabled to notice 00:06:11.570 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:06:11.570 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:06:11.570 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:11.570 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:11.570 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:06:11.571 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:06:11.571 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:06:11.571 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:06:11.571 EAL: No shared files mode enabled, IPC will be disabled 00:06:11.571 EAL: No shared files mode enabled, IPC is disabled 00:06:11.571 EAL: Selected IOVA mode 'PA' 00:06:11.571 EAL: Probing VFIO support... 00:06:11.571 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:11.571 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:11.571 EAL: Ask a virtual area of 0x2e000 bytes 00:06:11.571 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:11.571 EAL: Setting up physically contiguous memory... 00:06:11.571 EAL: Setting maximum number of open files to 524288 00:06:11.571 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:11.571 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:11.571 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.571 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:11.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.571 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.571 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:11.571 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:11.571 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.571 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:11.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.571 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.571 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:11.571 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:11.571 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.571 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:11.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.571 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.571 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:11.571 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:11.571 EAL: Ask a virtual area of 0x61000 bytes 00:06:11.571 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:11.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:11.571 EAL: Ask a virtual area of 0x400000000 bytes 00:06:11.571 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:11.571 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:11.571 EAL: Hugepages will be freed exactly as allocated. 
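Annotation: each "Ask a virtual area of 0x400000000 bytes" request above follows from the memseg-list parameters EAL printed (n_segs:8192, hugepage_sz:2097152). A quick arithmetic check of that size; the 64 GiB total is an inference from the four lists, not something the log states:

  # 8192 segments x 2 MiB hugepages = 16 GiB of virtual space reserved per memseg list
  printf '%d = 0x%x\n' $((8192 * 2097152)) $((8192 * 2097152))
  # -> 17179869184 = 0x400000000, matching each "size = 0x400000000" reservation;
  #    four such lists means roughly 64 GiB of VA set aside before any hugepage is touched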
00:06:11.571 EAL: No shared files mode enabled, IPC is disabled 00:06:11.571 EAL: No shared files mode enabled, IPC is disabled 00:06:11.829 EAL: TSC frequency is ~2200000 KHz 00:06:11.829 EAL: Main lcore 0 is ready (tid=7f429af06a40;cpuset=[0]) 00:06:11.829 EAL: Trying to obtain current memory policy. 00:06:11.829 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.829 EAL: Restoring previous memory policy: 0 00:06:11.829 EAL: request: mp_malloc_sync 00:06:11.829 EAL: No shared files mode enabled, IPC is disabled 00:06:11.829 EAL: Heap on socket 0 was expanded by 2MB 00:06:11.829 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:11.829 EAL: No shared files mode enabled, IPC is disabled 00:06:11.829 EAL: Mem event callback 'spdk:(nil)' registered 00:06:11.829 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:11.829 00:06:11.829 00:06:11.829 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.829 http://cunit.sourceforge.net/ 00:06:11.829 00:06:11.829 00:06:11.829 Suite: components_suite 00:06:12.087 Test: vtophys_malloc_test ...passed 00:06:12.087 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:12.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.087 EAL: Restoring previous memory policy: 4 00:06:12.087 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.087 EAL: request: mp_malloc_sync 00:06:12.087 EAL: No shared files mode enabled, IPC is disabled 00:06:12.087 EAL: Heap on socket 0 was expanded by 4MB 00:06:12.087 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.087 EAL: request: mp_malloc_sync 00:06:12.087 EAL: No shared files mode enabled, IPC is disabled 00:06:12.087 EAL: Heap on socket 0 was shrunk by 4MB 00:06:12.087 EAL: Trying to obtain current memory policy. 00:06:12.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.087 EAL: Restoring previous memory policy: 4 00:06:12.087 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.087 EAL: request: mp_malloc_sync 00:06:12.087 EAL: No shared files mode enabled, IPC is disabled 00:06:12.087 EAL: Heap on socket 0 was expanded by 6MB 00:06:12.087 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.087 EAL: request: mp_malloc_sync 00:06:12.087 EAL: No shared files mode enabled, IPC is disabled 00:06:12.087 EAL: Heap on socket 0 was shrunk by 6MB 00:06:12.087 EAL: Trying to obtain current memory policy. 00:06:12.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 10MB 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was shrunk by 10MB 00:06:12.344 EAL: Trying to obtain current memory policy. 
00:06:12.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 18MB 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was shrunk by 18MB 00:06:12.344 EAL: Trying to obtain current memory policy. 00:06:12.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 34MB 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was shrunk by 34MB 00:06:12.344 EAL: Trying to obtain current memory policy. 00:06:12.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 66MB 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was shrunk by 66MB 00:06:12.344 EAL: Trying to obtain current memory policy. 00:06:12.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 130MB 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was shrunk by 130MB 00:06:12.344 EAL: Trying to obtain current memory policy. 00:06:12.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.344 EAL: Restoring previous memory policy: 4 00:06:12.344 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.344 EAL: request: mp_malloc_sync 00:06:12.344 EAL: No shared files mode enabled, IPC is disabled 00:06:12.344 EAL: Heap on socket 0 was expanded by 258MB 00:06:12.601 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.601 EAL: request: mp_malloc_sync 00:06:12.601 EAL: No shared files mode enabled, IPC is disabled 00:06:12.601 EAL: Heap on socket 0 was shrunk by 258MB 00:06:12.601 EAL: Trying to obtain current memory policy. 
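Annotation: the expand/shrink sizes in vtophys_spdk_malloc_test (4MB, 6MB, 10MB, ..., 258MB above, then 514MB and 1026MB just below) look irregular but appear to follow (2^n + 2) MB. A sketch that reproduces the sequence, assuming that reading of the log rather than knowledge of the test source:

  # presumed allocation-size pattern behind the "expanded by ...MB" lines: (1 << n) + 2 MB
  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB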
00:06:12.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.601 EAL: Restoring previous memory policy: 4 00:06:12.601 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.601 EAL: request: mp_malloc_sync 00:06:12.601 EAL: No shared files mode enabled, IPC is disabled 00:06:12.601 EAL: Heap on socket 0 was expanded by 514MB 00:06:12.858 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.858 EAL: request: mp_malloc_sync 00:06:12.858 EAL: No shared files mode enabled, IPC is disabled 00:06:12.858 EAL: Heap on socket 0 was shrunk by 514MB 00:06:12.858 EAL: Trying to obtain current memory policy. 00:06:12.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.115 EAL: Restoring previous memory policy: 4 00:06:13.115 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.115 EAL: request: mp_malloc_sync 00:06:13.115 EAL: No shared files mode enabled, IPC is disabled 00:06:13.115 EAL: Heap on socket 0 was expanded by 1026MB 00:06:13.372 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.630 passed 00:06:13.630 00:06:13.630 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.630 suites 1 1 n/a 0 0 00:06:13.630 tests 2 2 2 0 0 00:06:13.630 asserts 5225 5225 5225 0 n/a 00:06:13.630 00:06:13.630 Elapsed time = 1.788 secondsEAL: request: mp_malloc_sync 00:06:13.630 EAL: No shared files mode enabled, IPC is disabled 00:06:13.630 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:13.630 00:06:13.630 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.630 EAL: request: mp_malloc_sync 00:06:13.630 EAL: No shared files mode enabled, IPC is disabled 00:06:13.630 EAL: Heap on socket 0 was shrunk by 2MB 00:06:13.630 EAL: No shared files mode enabled, IPC is disabled 00:06:13.630 EAL: No shared files mode enabled, IPC is disabled 00:06:13.630 EAL: No shared files mode enabled, IPC is disabled 00:06:13.630 00:06:13.630 real 0m2.056s 00:06:13.630 user 0m0.971s 00:06:13.630 sys 0m0.936s 00:06:13.630 17:12:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.630 17:12:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:13.630 ************************************ 00:06:13.630 END TEST env_vtophys 00:06:13.630 ************************************ 00:06:13.630 17:12:24 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.630 17:12:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.630 17:12:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.630 17:12:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.630 17:12:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.630 ************************************ 00:06:13.630 START TEST env_pci 00:06:13.630 ************************************ 00:06:13.630 17:12:24 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:13.630 00:06:13.630 00:06:13.630 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.630 http://cunit.sourceforge.net/ 00:06:13.630 00:06:13.630 00:06:13.630 Suite: pci 00:06:13.630 Test: pci_hook ...[2024-07-15 17:12:24.423687] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 75510 has claimed it 00:06:13.630 passed 00:06:13.630 00:06:13.630 EAL: Cannot find device (10000:00:01.0) 00:06:13.630 EAL: Failed to attach device on primary process 00:06:13.630 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:13.630 suites 1 1 n/a 0 0 00:06:13.630 tests 1 1 1 0 0 00:06:13.630 asserts 25 25 25 0 n/a 00:06:13.630 00:06:13.630 Elapsed time = 0.006 seconds 00:06:13.630 00:06:13.630 real 0m0.069s 00:06:13.630 user 0m0.027s 00:06:13.630 sys 0m0.041s 00:06:13.630 17:12:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.630 17:12:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:13.630 ************************************ 00:06:13.630 END TEST env_pci 00:06:13.630 ************************************ 00:06:13.887 17:12:24 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.887 17:12:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:13.887 17:12:24 env -- env/env.sh@15 -- # uname 00:06:13.887 17:12:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:13.887 17:12:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:13.887 17:12:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.887 17:12:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:13.887 17:12:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.887 17:12:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.887 ************************************ 00:06:13.887 START TEST env_dpdk_post_init 00:06:13.888 ************************************ 00:06:13.888 17:12:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.888 EAL: Detected CPU lcores: 10 00:06:13.888 EAL: Detected NUMA nodes: 1 00:06:13.888 EAL: Detected shared linkage of DPDK 00:06:13.888 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.888 EAL: Selected IOVA mode 'PA' 00:06:14.145 Starting DPDK initialization... 00:06:14.145 Starting SPDK post initialization... 00:06:14.145 SPDK NVMe probe 00:06:14.145 Attaching to 0000:00:10.0 00:06:14.145 Attaching to 0000:00:11.0 00:06:14.145 Attaching to 0000:00:12.0 00:06:14.145 Attaching to 0000:00:13.0 00:06:14.145 Attached to 0000:00:10.0 00:06:14.145 Attached to 0000:00:11.0 00:06:14.145 Attached to 0000:00:13.0 00:06:14.145 Attached to 0000:00:12.0 00:06:14.145 Cleaning up... 
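Annotation: the env_dpdk_post_init invocation whose probe/attach output appears above is assembled a few records earlier in env.sh: a single-core mask plus, on Linux only, a fixed base virtual address. A minimal sketch of that assembly, with paths relative to the spdk repo; the reason given for the base address (predictable mappings) is an assumption, not something the log prints:

  # how env.sh builds the arguments seen in the run_test line above
  argv='-c 0x1 '                                              # one-core mask
  [ "$(uname)" = Linux ] && argv+='--base-virtaddr=0x200000000000'   # pin EAL mappings to a known VA
  ./test/env/env_dpdk_post_init/env_dpdk_post_init $argv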
00:06:14.145 00:06:14.145 real 0m0.284s 00:06:14.145 user 0m0.105s 00:06:14.145 sys 0m0.080s 00:06:14.145 17:12:24 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.145 17:12:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.145 ************************************ 00:06:14.145 END TEST env_dpdk_post_init 00:06:14.145 ************************************ 00:06:14.145 17:12:24 env -- common/autotest_common.sh@1142 -- # return 0 00:06:14.145 17:12:24 env -- env/env.sh@26 -- # uname 00:06:14.145 17:12:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:14.145 17:12:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.145 17:12:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.145 17:12:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.145 17:12:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.145 ************************************ 00:06:14.145 START TEST env_mem_callbacks 00:06:14.145 ************************************ 00:06:14.145 17:12:24 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:14.145 EAL: Detected CPU lcores: 10 00:06:14.145 EAL: Detected NUMA nodes: 1 00:06:14.145 EAL: Detected shared linkage of DPDK 00:06:14.145 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.145 EAL: Selected IOVA mode 'PA' 00:06:14.404 00:06:14.404 00:06:14.404 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.404 http://cunit.sourceforge.net/ 00:06:14.404 00:06:14.404 00:06:14.404 Suite: memory 00:06:14.404 Test: test ... 00:06:14.404 register 0x200000200000 2097152 00:06:14.404 malloc 3145728 00:06:14.404 register 0x200000400000 4194304 00:06:14.404 buf 0x200000500000 len 3145728 PASSED 00:06:14.404 malloc 64 00:06:14.404 buf 0x2000004fff40 len 64 PASSED 00:06:14.404 malloc 4194304 00:06:14.404 register 0x200000800000 6291456 00:06:14.404 buf 0x200000a00000 len 4194304 PASSED 00:06:14.404 free 0x200000500000 3145728 00:06:14.404 free 0x2000004fff40 64 00:06:14.404 unregister 0x200000400000 4194304 PASSED 00:06:14.404 free 0x200000a00000 4194304 00:06:14.404 unregister 0x200000800000 6291456 PASSED 00:06:14.404 malloc 8388608 00:06:14.404 register 0x200000400000 10485760 00:06:14.404 buf 0x200000600000 len 8388608 PASSED 00:06:14.404 free 0x200000600000 8388608 00:06:14.404 unregister 0x200000400000 10485760 PASSED 00:06:14.404 passed 00:06:14.404 00:06:14.404 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.404 suites 1 1 n/a 0 0 00:06:14.404 tests 1 1 1 0 0 00:06:14.404 asserts 15 15 15 0 n/a 00:06:14.404 00:06:14.404 Elapsed time = 0.009 seconds 00:06:14.404 00:06:14.404 real 0m0.208s 00:06:14.404 user 0m0.045s 00:06:14.404 sys 0m0.062s 00:06:14.404 17:12:25 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.404 17:12:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:14.404 ************************************ 00:06:14.404 END TEST env_mem_callbacks 00:06:14.404 ************************************ 00:06:14.404 17:12:25 env -- common/autotest_common.sh@1142 -- # return 0 00:06:14.404 00:06:14.404 real 0m3.342s 00:06:14.404 user 0m1.593s 00:06:14.404 sys 0m1.373s 00:06:14.404 17:12:25 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.404 17:12:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.404 
************************************ 00:06:14.404 END TEST env 00:06:14.404 ************************************ 00:06:14.404 17:12:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.404 17:12:25 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:14.404 17:12:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.404 17:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.404 17:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.404 ************************************ 00:06:14.404 START TEST rpc 00:06:14.404 ************************************ 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:14.404 * Looking for test storage... 00:06:14.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:14.404 17:12:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=75629 00:06:14.404 17:12:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:14.404 17:12:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.404 17:12:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 75629 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@829 -- # '[' -z 75629 ']' 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.404 17:12:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.662 [2024-07-15 17:12:25.380312] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:14.662 [2024-07-15 17:12:25.380540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75629 ] 00:06:14.920 [2024-07-15 17:12:25.536619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.920 [2024-07-15 17:12:25.560659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.920 [2024-07-15 17:12:25.663742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:14.920 [2024-07-15 17:12:25.663815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 75629' to capture a snapshot of events at runtime. 00:06:14.920 [2024-07-15 17:12:25.663838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.920 [2024-07-15 17:12:25.663855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.921 [2024-07-15 17:12:25.663877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid75629 for offline analysis/debug. 
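Annotation: the app_setup_trace notices above name the tracepoint artifacts for this spdk_tgt instance (pid 75629, tracepoint group "bdev"). A minimal sketch of acting on them while the target is still running, using only the command and path the log itself prints; spdk_trace is assumed to be the binary under build/bin and /tmp/ is an arbitrary destination:

  # live snapshot, exactly as suggested by app.c above
  spdk_trace -s spdk_tgt -p 75629
  # or keep the shared-memory trace file for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid75629 /tmp/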
00:06:14.921 [2024-07-15 17:12:25.663924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.858 17:12:26 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.858 17:12:26 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.858 17:12:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:15.858 17:12:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:15.858 17:12:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:15.858 17:12:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:15.858 17:12:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.858 17:12:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.858 17:12:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 ************************************ 00:06:15.858 START TEST rpc_integrity 00:06:15.858 ************************************ 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.858 { 00:06:15.858 "name": "Malloc0", 00:06:15.858 "aliases": [ 00:06:15.858 "fbcd178a-0b4c-4e6b-ac86-7ce596ee4287" 00:06:15.858 ], 00:06:15.858 "product_name": "Malloc disk", 00:06:15.858 "block_size": 512, 00:06:15.858 "num_blocks": 16384, 00:06:15.858 "uuid": "fbcd178a-0b4c-4e6b-ac86-7ce596ee4287", 00:06:15.858 "assigned_rate_limits": { 00:06:15.858 "rw_ios_per_sec": 0, 00:06:15.858 "rw_mbytes_per_sec": 0, 00:06:15.858 "r_mbytes_per_sec": 0, 00:06:15.858 "w_mbytes_per_sec": 0 00:06:15.858 }, 00:06:15.858 "claimed": false, 00:06:15.858 "zoned": false, 00:06:15.858 "supported_io_types": { 00:06:15.858 "read": true, 00:06:15.858 "write": true, 00:06:15.858 "unmap": true, 00:06:15.858 "flush": true, 
00:06:15.858 "reset": true, 00:06:15.858 "nvme_admin": false, 00:06:15.858 "nvme_io": false, 00:06:15.858 "nvme_io_md": false, 00:06:15.858 "write_zeroes": true, 00:06:15.858 "zcopy": true, 00:06:15.858 "get_zone_info": false, 00:06:15.858 "zone_management": false, 00:06:15.858 "zone_append": false, 00:06:15.858 "compare": false, 00:06:15.858 "compare_and_write": false, 00:06:15.858 "abort": true, 00:06:15.858 "seek_hole": false, 00:06:15.858 "seek_data": false, 00:06:15.858 "copy": true, 00:06:15.858 "nvme_iov_md": false 00:06:15.858 }, 00:06:15.858 "memory_domains": [ 00:06:15.858 { 00:06:15.858 "dma_device_id": "system", 00:06:15.858 "dma_device_type": 1 00:06:15.858 }, 00:06:15.858 { 00:06:15.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.858 "dma_device_type": 2 00:06:15.858 } 00:06:15.858 ], 00:06:15.858 "driver_specific": {} 00:06:15.858 } 00:06:15.858 ]' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 [2024-07-15 17:12:26.522393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:15.858 [2024-07-15 17:12:26.522511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.858 [2024-07-15 17:12:26.522615] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:15.858 [2024-07-15 17:12:26.522654] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.858 [2024-07-15 17:12:26.525850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.858 [2024-07-15 17:12:26.525896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.858 Passthru0 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:15.858 { 00:06:15.858 "name": "Malloc0", 00:06:15.858 "aliases": [ 00:06:15.858 "fbcd178a-0b4c-4e6b-ac86-7ce596ee4287" 00:06:15.858 ], 00:06:15.858 "product_name": "Malloc disk", 00:06:15.858 "block_size": 512, 00:06:15.858 "num_blocks": 16384, 00:06:15.858 "uuid": "fbcd178a-0b4c-4e6b-ac86-7ce596ee4287", 00:06:15.858 "assigned_rate_limits": { 00:06:15.858 "rw_ios_per_sec": 0, 00:06:15.858 "rw_mbytes_per_sec": 0, 00:06:15.858 "r_mbytes_per_sec": 0, 00:06:15.858 "w_mbytes_per_sec": 0 00:06:15.858 }, 00:06:15.858 "claimed": true, 00:06:15.858 "claim_type": "exclusive_write", 00:06:15.858 "zoned": false, 00:06:15.858 "supported_io_types": { 00:06:15.858 "read": true, 00:06:15.858 "write": true, 00:06:15.858 "unmap": true, 00:06:15.858 "flush": true, 00:06:15.858 "reset": true, 00:06:15.858 "nvme_admin": false, 00:06:15.858 "nvme_io": false, 00:06:15.858 "nvme_io_md": false, 00:06:15.858 "write_zeroes": true, 00:06:15.858 "zcopy": true, 
00:06:15.858 "get_zone_info": false, 00:06:15.858 "zone_management": false, 00:06:15.858 "zone_append": false, 00:06:15.858 "compare": false, 00:06:15.858 "compare_and_write": false, 00:06:15.858 "abort": true, 00:06:15.858 "seek_hole": false, 00:06:15.858 "seek_data": false, 00:06:15.858 "copy": true, 00:06:15.858 "nvme_iov_md": false 00:06:15.858 }, 00:06:15.858 "memory_domains": [ 00:06:15.858 { 00:06:15.858 "dma_device_id": "system", 00:06:15.858 "dma_device_type": 1 00:06:15.858 }, 00:06:15.858 { 00:06:15.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.858 "dma_device_type": 2 00:06:15.858 } 00:06:15.858 ], 00:06:15.858 "driver_specific": {} 00:06:15.858 }, 00:06:15.858 { 00:06:15.858 "name": "Passthru0", 00:06:15.858 "aliases": [ 00:06:15.858 "c2fadf09-a38d-5714-9f2a-1f84ad03e596" 00:06:15.858 ], 00:06:15.858 "product_name": "passthru", 00:06:15.858 "block_size": 512, 00:06:15.858 "num_blocks": 16384, 00:06:15.858 "uuid": "c2fadf09-a38d-5714-9f2a-1f84ad03e596", 00:06:15.858 "assigned_rate_limits": { 00:06:15.858 "rw_ios_per_sec": 0, 00:06:15.858 "rw_mbytes_per_sec": 0, 00:06:15.858 "r_mbytes_per_sec": 0, 00:06:15.858 "w_mbytes_per_sec": 0 00:06:15.858 }, 00:06:15.858 "claimed": false, 00:06:15.858 "zoned": false, 00:06:15.858 "supported_io_types": { 00:06:15.858 "read": true, 00:06:15.858 "write": true, 00:06:15.858 "unmap": true, 00:06:15.858 "flush": true, 00:06:15.858 "reset": true, 00:06:15.858 "nvme_admin": false, 00:06:15.858 "nvme_io": false, 00:06:15.858 "nvme_io_md": false, 00:06:15.858 "write_zeroes": true, 00:06:15.858 "zcopy": true, 00:06:15.858 "get_zone_info": false, 00:06:15.858 "zone_management": false, 00:06:15.858 "zone_append": false, 00:06:15.858 "compare": false, 00:06:15.858 "compare_and_write": false, 00:06:15.858 "abort": true, 00:06:15.858 "seek_hole": false, 00:06:15.858 "seek_data": false, 00:06:15.858 "copy": true, 00:06:15.858 "nvme_iov_md": false 00:06:15.858 }, 00:06:15.858 "memory_domains": [ 00:06:15.858 { 00:06:15.858 "dma_device_id": "system", 00:06:15.858 "dma_device_type": 1 00:06:15.858 }, 00:06:15.858 { 00:06:15.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.858 "dma_device_type": 2 00:06:15.858 } 00:06:15.858 ], 00:06:15.858 "driver_specific": { 00:06:15.858 "passthru": { 00:06:15.858 "name": "Passthru0", 00:06:15.858 "base_bdev_name": "Malloc0" 00:06:15.858 } 00:06:15.858 } 00:06:15.858 } 00:06:15.858 ]' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.858 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.858 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.859 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
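Taken together, the rpc_integrity trace above and the teardown checks just below amount to a create/inspect/delete cycle driven entirely over JSON-RPC: an empty bdev list, an 8 MB malloc bdev with 512-byte blocks, a passthru bdev claiming it, a list length of 2, then deletion of both and a length of 0 again. A minimal stand-alone sketch of that cycle, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the SPDK repo's scripts/rpc.py (this is not the test script itself):

# Sketch only: mirrors the rpc_integrity flow, assumes spdk_tgt is already running.
rpc=scripts/rpc.py
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # nothing registered yet
malloc=$($rpc bdev_malloc_create 8 512)               # 8 MB malloc bdev, 512-byte blocks -> "Malloc0"
$rpc bdev_passthru_create -b "$malloc" -p Passthru0   # claim the malloc bdev with a passthru vbdev
[ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]        # malloc + passthru both visible
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # back to an empty list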
00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.859 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.859 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.859 17:12:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.859 00:06:15.859 real 0m0.324s 00:06:15.859 user 0m0.211s 00:06:15.859 sys 0m0.037s 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.859 ************************************ 00:06:15.859 17:12:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.859 END TEST rpc_integrity 00:06:15.859 ************************************ 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.116 17:12:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 ************************************ 00:06:16.116 START TEST rpc_plugins 00:06:16.116 ************************************ 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:16.116 { 00:06:16.116 "name": "Malloc1", 00:06:16.116 "aliases": [ 00:06:16.116 "8f4e4f4b-beea-441d-a044-5008c8fcf7f7" 00:06:16.116 ], 00:06:16.116 "product_name": "Malloc disk", 00:06:16.116 "block_size": 4096, 00:06:16.116 "num_blocks": 256, 00:06:16.116 "uuid": "8f4e4f4b-beea-441d-a044-5008c8fcf7f7", 00:06:16.116 "assigned_rate_limits": { 00:06:16.116 "rw_ios_per_sec": 0, 00:06:16.116 "rw_mbytes_per_sec": 0, 00:06:16.116 "r_mbytes_per_sec": 0, 00:06:16.116 "w_mbytes_per_sec": 0 00:06:16.116 }, 00:06:16.116 "claimed": false, 00:06:16.116 "zoned": false, 00:06:16.116 "supported_io_types": { 00:06:16.116 "read": true, 00:06:16.116 "write": true, 00:06:16.116 "unmap": true, 00:06:16.116 "flush": true, 00:06:16.116 "reset": true, 00:06:16.116 "nvme_admin": false, 00:06:16.116 "nvme_io": false, 00:06:16.116 "nvme_io_md": false, 00:06:16.116 "write_zeroes": true, 00:06:16.116 "zcopy": true, 00:06:16.116 "get_zone_info": false, 00:06:16.116 "zone_management": false, 00:06:16.116 "zone_append": false, 00:06:16.116 "compare": false, 00:06:16.116 "compare_and_write": false, 00:06:16.116 "abort": true, 00:06:16.116 "seek_hole": false, 00:06:16.116 "seek_data": false, 00:06:16.116 "copy": true, 00:06:16.116 "nvme_iov_md": false 00:06:16.116 }, 00:06:16.116 "memory_domains": [ 00:06:16.116 { 00:06:16.116 "dma_device_id": "system", 00:06:16.116 
"dma_device_type": 1 00:06:16.116 }, 00:06:16.116 { 00:06:16.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.116 "dma_device_type": 2 00:06:16.116 } 00:06:16.116 ], 00:06:16.116 "driver_specific": {} 00:06:16.116 } 00:06:16.116 ]' 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:16.116 ************************************ 00:06:16.116 END TEST rpc_plugins 00:06:16.116 ************************************ 00:06:16.116 17:12:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:16.116 00:06:16.116 real 0m0.150s 00:06:16.116 user 0m0.098s 00:06:16.116 sys 0m0.019s 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.116 17:12:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.116 17:12:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.116 17:12:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.117 ************************************ 00:06:16.117 START TEST rpc_trace_cmd_test 00:06:16.117 ************************************ 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.117 17:12:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:16.117 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid75629", 00:06:16.117 "tpoint_group_mask": "0x8", 00:06:16.117 "iscsi_conn": { 00:06:16.117 "mask": "0x2", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "scsi": { 00:06:16.117 "mask": "0x4", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "bdev": { 00:06:16.117 "mask": "0x8", 00:06:16.117 "tpoint_mask": "0xffffffffffffffff" 00:06:16.117 }, 00:06:16.117 "nvmf_rdma": { 00:06:16.117 "mask": "0x10", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "nvmf_tcp": { 00:06:16.117 "mask": "0x20", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "ftl": 
{ 00:06:16.117 "mask": "0x40", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "blobfs": { 00:06:16.117 "mask": "0x80", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "dsa": { 00:06:16.117 "mask": "0x200", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "thread": { 00:06:16.117 "mask": "0x400", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "nvme_pcie": { 00:06:16.117 "mask": "0x800", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "iaa": { 00:06:16.117 "mask": "0x1000", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "nvme_tcp": { 00:06:16.117 "mask": "0x2000", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "bdev_nvme": { 00:06:16.117 "mask": "0x4000", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 }, 00:06:16.117 "sock": { 00:06:16.117 "mask": "0x8000", 00:06:16.117 "tpoint_mask": "0x0" 00:06:16.117 } 00:06:16.117 }' 00:06:16.374 17:12:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:16.374 ************************************ 00:06:16.374 END TEST rpc_trace_cmd_test 00:06:16.374 ************************************ 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:16.374 00:06:16.374 real 0m0.279s 00:06:16.374 user 0m0.236s 00:06:16.374 sys 0m0.031s 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.374 17:12:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 17:12:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.632 17:12:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:16.632 17:12:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:16.632 17:12:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:16.632 17:12:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.632 17:12:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.632 17:12:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 ************************************ 00:06:16.632 START TEST rpc_daemon_integrity 00:06:16.632 ************************************ 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:16.632 { 00:06:16.632 "name": "Malloc2", 00:06:16.632 "aliases": [ 00:06:16.632 "7bf078f1-8077-4f8e-a1cd-6ad7c971c604" 00:06:16.632 ], 00:06:16.632 "product_name": "Malloc disk", 00:06:16.632 "block_size": 512, 00:06:16.632 "num_blocks": 16384, 00:06:16.632 "uuid": "7bf078f1-8077-4f8e-a1cd-6ad7c971c604", 00:06:16.632 "assigned_rate_limits": { 00:06:16.632 "rw_ios_per_sec": 0, 00:06:16.632 "rw_mbytes_per_sec": 0, 00:06:16.632 "r_mbytes_per_sec": 0, 00:06:16.632 "w_mbytes_per_sec": 0 00:06:16.632 }, 00:06:16.632 "claimed": false, 00:06:16.632 "zoned": false, 00:06:16.632 "supported_io_types": { 00:06:16.632 "read": true, 00:06:16.632 "write": true, 00:06:16.632 "unmap": true, 00:06:16.632 "flush": true, 00:06:16.632 "reset": true, 00:06:16.632 "nvme_admin": false, 00:06:16.632 "nvme_io": false, 00:06:16.632 "nvme_io_md": false, 00:06:16.632 "write_zeroes": true, 00:06:16.632 "zcopy": true, 00:06:16.632 "get_zone_info": false, 00:06:16.632 "zone_management": false, 00:06:16.632 "zone_append": false, 00:06:16.632 "compare": false, 00:06:16.632 "compare_and_write": false, 00:06:16.632 "abort": true, 00:06:16.632 "seek_hole": false, 00:06:16.632 "seek_data": false, 00:06:16.632 "copy": true, 00:06:16.632 "nvme_iov_md": false 00:06:16.632 }, 00:06:16.632 "memory_domains": [ 00:06:16.632 { 00:06:16.632 "dma_device_id": "system", 00:06:16.632 "dma_device_type": 1 00:06:16.632 }, 00:06:16.632 { 00:06:16.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.632 "dma_device_type": 2 00:06:16.632 } 00:06:16.632 ], 00:06:16.632 "driver_specific": {} 00:06:16.632 } 00:06:16.632 ]' 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 [2024-07-15 17:12:27.433330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:16.632 [2024-07-15 17:12:27.433420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.632 [2024-07-15 17:12:27.433461] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:16.632 [2024-07-15 17:12:27.433480] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.632 [2024-07-15 17:12:27.436570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.632 [2024-07-15 17:12:27.436616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:16.632 Passthru0 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.632 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:16.632 { 00:06:16.632 "name": "Malloc2", 00:06:16.632 "aliases": [ 00:06:16.632 "7bf078f1-8077-4f8e-a1cd-6ad7c971c604" 00:06:16.632 ], 00:06:16.632 "product_name": "Malloc disk", 00:06:16.632 "block_size": 512, 00:06:16.632 "num_blocks": 16384, 00:06:16.632 "uuid": "7bf078f1-8077-4f8e-a1cd-6ad7c971c604", 00:06:16.632 "assigned_rate_limits": { 00:06:16.632 "rw_ios_per_sec": 0, 00:06:16.632 "rw_mbytes_per_sec": 0, 00:06:16.632 "r_mbytes_per_sec": 0, 00:06:16.632 "w_mbytes_per_sec": 0 00:06:16.632 }, 00:06:16.632 "claimed": true, 00:06:16.632 "claim_type": "exclusive_write", 00:06:16.632 "zoned": false, 00:06:16.633 "supported_io_types": { 00:06:16.633 "read": true, 00:06:16.633 "write": true, 00:06:16.633 "unmap": true, 00:06:16.633 "flush": true, 00:06:16.633 "reset": true, 00:06:16.633 "nvme_admin": false, 00:06:16.633 "nvme_io": false, 00:06:16.633 "nvme_io_md": false, 00:06:16.633 "write_zeroes": true, 00:06:16.633 "zcopy": true, 00:06:16.633 "get_zone_info": false, 00:06:16.633 "zone_management": false, 00:06:16.633 "zone_append": false, 00:06:16.633 "compare": false, 00:06:16.633 "compare_and_write": false, 00:06:16.633 "abort": true, 00:06:16.633 "seek_hole": false, 00:06:16.633 "seek_data": false, 00:06:16.633 "copy": true, 00:06:16.633 "nvme_iov_md": false 00:06:16.633 }, 00:06:16.633 "memory_domains": [ 00:06:16.633 { 00:06:16.633 "dma_device_id": "system", 00:06:16.633 "dma_device_type": 1 00:06:16.633 }, 00:06:16.633 { 00:06:16.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.633 "dma_device_type": 2 00:06:16.633 } 00:06:16.633 ], 00:06:16.633 "driver_specific": {} 00:06:16.633 }, 00:06:16.633 { 00:06:16.633 "name": "Passthru0", 00:06:16.633 "aliases": [ 00:06:16.633 "8f174233-9f43-52d1-8cd1-005ec57a56ad" 00:06:16.633 ], 00:06:16.633 "product_name": "passthru", 00:06:16.633 "block_size": 512, 00:06:16.633 "num_blocks": 16384, 00:06:16.633 "uuid": "8f174233-9f43-52d1-8cd1-005ec57a56ad", 00:06:16.633 "assigned_rate_limits": { 00:06:16.633 "rw_ios_per_sec": 0, 00:06:16.633 "rw_mbytes_per_sec": 0, 00:06:16.633 "r_mbytes_per_sec": 0, 00:06:16.633 "w_mbytes_per_sec": 0 00:06:16.633 }, 00:06:16.633 "claimed": false, 00:06:16.633 "zoned": false, 00:06:16.633 "supported_io_types": { 00:06:16.633 "read": true, 00:06:16.633 "write": true, 00:06:16.633 "unmap": true, 00:06:16.633 "flush": true, 00:06:16.633 "reset": true, 00:06:16.633 "nvme_admin": false, 00:06:16.633 "nvme_io": false, 00:06:16.633 "nvme_io_md": false, 00:06:16.633 "write_zeroes": true, 00:06:16.633 "zcopy": true, 00:06:16.633 "get_zone_info": false, 00:06:16.633 "zone_management": false, 00:06:16.633 "zone_append": false, 00:06:16.633 "compare": 
false, 00:06:16.633 "compare_and_write": false, 00:06:16.633 "abort": true, 00:06:16.633 "seek_hole": false, 00:06:16.633 "seek_data": false, 00:06:16.633 "copy": true, 00:06:16.633 "nvme_iov_md": false 00:06:16.633 }, 00:06:16.633 "memory_domains": [ 00:06:16.633 { 00:06:16.633 "dma_device_id": "system", 00:06:16.633 "dma_device_type": 1 00:06:16.633 }, 00:06:16.633 { 00:06:16.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:16.633 "dma_device_type": 2 00:06:16.633 } 00:06:16.633 ], 00:06:16.633 "driver_specific": { 00:06:16.633 "passthru": { 00:06:16.633 "name": "Passthru0", 00:06:16.633 "base_bdev_name": "Malloc2" 00:06:16.633 } 00:06:16.633 } 00:06:16.633 } 00:06:16.633 ]' 00:06:16.633 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:16.891 ************************************ 00:06:16.891 END TEST rpc_daemon_integrity 00:06:16.891 ************************************ 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:16.891 00:06:16.891 real 0m0.326s 00:06:16.891 user 0m0.213s 00:06:16.891 sys 0m0.043s 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.891 17:12:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.891 17:12:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:16.891 17:12:27 rpc -- rpc/rpc.sh@84 -- # killprocess 75629 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@948 -- # '[' -z 75629 ']' 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@952 -- # kill -0 75629 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75629 00:06:16.891 killing process with pid 75629 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.891 
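Every killprocess call in this log (pid 75629 here, and the target pids later) follows the same guard sequence: check the pid argument is present, probe the process with kill -0, read its command name from ps to make sure it is not a sudo wrapper, then kill it and wait for it to exit. A condensed, hypothetical rendering of that helper, not the actual autotest_common.sh implementation:

# Hypothetical condensed helper; mirrors the checks visible above, the real helper does more bookkeeping.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1                      # still alive?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                             # valid when $pid is a child of this shell
}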
17:12:27 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75629' 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@967 -- # kill 75629 00:06:16.891 17:12:27 rpc -- common/autotest_common.sh@972 -- # wait 75629 00:06:17.457 00:06:17.457 real 0m2.965s 00:06:17.457 user 0m3.747s 00:06:17.458 sys 0m0.803s 00:06:17.458 ************************************ 00:06:17.458 END TEST rpc 00:06:17.458 ************************************ 00:06:17.458 17:12:28 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.458 17:12:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.458 17:12:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.458 17:12:28 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:17.458 17:12:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.458 17:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.458 17:12:28 -- common/autotest_common.sh@10 -- # set +x 00:06:17.458 ************************************ 00:06:17.458 START TEST skip_rpc 00:06:17.458 ************************************ 00:06:17.458 17:12:28 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:17.458 * Looking for test storage... 00:06:17.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:17.458 17:12:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.458 17:12:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:17.458 17:12:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:17.458 17:12:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.458 17:12:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.458 17:12:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.458 ************************************ 00:06:17.458 START TEST skip_rpc 00:06:17.458 ************************************ 00:06:17.458 17:12:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:17.458 17:12:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=75828 00:06:17.458 17:12:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.458 17:12:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:17.458 17:12:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:17.716 [2024-07-15 17:12:28.377211] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:17.716 [2024-07-15 17:12:28.377389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75828 ] 00:06:17.716 [2024-07-15 17:12:28.522449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
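The skip_rpc case that begins here starts spdk_tgt with --no-rpc-server, so the target comes up without ever opening /var/tmp/spdk.sock; the NOT wrapper on the following lines then asserts that an ordinary RPC such as spdk_get_version fails, and that failure is the passing condition. A rough sketch of the same assertion, assuming the paths used throughout this log:

# Sketch: no RPC listener means every RPC must fail.
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                      # the test sleeps rather than waiting for a socket that never appears
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered" >&2
    exit 1
fi
kill "$tgt_pid"; wait "$tgt_pid" || true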
00:06:17.716 [2024-07-15 17:12:28.543338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.975 [2024-07-15 17:12:28.645001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 75828 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 75828 ']' 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 75828 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75828 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75828' 00:06:23.256 killing process with pid 75828 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 75828 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 75828 00:06:23.256 00:06:23.256 real 0m5.712s 00:06:23.256 user 0m5.216s 00:06:23.256 sys 0m0.381s 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.256 ************************************ 00:06:23.256 END TEST skip_rpc 00:06:23.256 ************************************ 00:06:23.256 17:12:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.256 17:12:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:23.256 17:12:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:23.256 17:12:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.256 17:12:34 skip_rpc 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.256 17:12:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.256 ************************************ 00:06:23.256 START TEST skip_rpc_with_json 00:06:23.256 ************************************ 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:23.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=75917 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 75917 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 75917 ']' 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.256 17:12:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:23.514 [2024-07-15 17:12:34.137263] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:23.514 [2024-07-15 17:12:34.137459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75917 ] 00:06:23.514 [2024-07-15 17:12:34.281791] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
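The gen_json_config helper started above exists to prove that save_config captures live runtime state: as the next lines show, it first confirms the tcp transport does not exist on the fresh target, then creates it and dumps the complete configuration into test/rpc/config.json. The three RPC calls, sketched with the same rpc.py helper:

# Sketch of the gen_json_config steps (paths as used by this test).
rpc=scripts/rpc.py
$rpc nvmf_get_transports --trtype tcp && { echo "unexpected: tcp transport already exists" >&2; exit 1; }
$rpc nvmf_create_transport -t tcp            # target logs "*** TCP Transport Init ***"
$rpc save_config > test/rpc/config.json      # full runtime config, including the new transport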
00:06:23.514 [2024-07-15 17:12:34.295733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.773 [2024-07-15 17:12:34.427352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.338 [2024-07-15 17:12:35.019212] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:24.338 request: 00:06:24.338 { 00:06:24.338 "trtype": "tcp", 00:06:24.338 "method": "nvmf_get_transports", 00:06:24.338 "req_id": 1 00:06:24.338 } 00:06:24.338 Got JSON-RPC error response 00:06:24.338 response: 00:06:24.338 { 00:06:24.338 "code": -19, 00:06:24.338 "message": "No such device" 00:06:24.338 } 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.338 [2024-07-15 17:12:35.027503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.338 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.595 { 00:06:24.596 "subsystems": [ 00:06:24.596 { 00:06:24.596 "subsystem": "keyring", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "iobuf", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "iobuf_set_options", 00:06:24.596 "params": { 00:06:24.596 "small_pool_count": 8192, 00:06:24.596 "large_pool_count": 1024, 00:06:24.596 "small_bufsize": 8192, 00:06:24.596 "large_bufsize": 135168 00:06:24.596 } 00:06:24.596 } 00:06:24.596 ] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "sock", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "sock_set_default_impl", 00:06:24.596 "params": { 00:06:24.596 "impl_name": "posix" 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "sock_impl_set_options", 00:06:24.596 "params": { 00:06:24.596 "impl_name": "ssl", 00:06:24.596 "recv_buf_size": 4096, 00:06:24.596 "send_buf_size": 4096, 00:06:24.596 "enable_recv_pipe": true, 00:06:24.596 "enable_quickack": false, 00:06:24.596 "enable_placement_id": 0, 00:06:24.596 "enable_zerocopy_send_server": true, 00:06:24.596 "enable_zerocopy_send_client": false, 00:06:24.596 "zerocopy_threshold": 0, 00:06:24.596 "tls_version": 0, 00:06:24.596 "enable_ktls": 
false 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "sock_impl_set_options", 00:06:24.596 "params": { 00:06:24.596 "impl_name": "posix", 00:06:24.596 "recv_buf_size": 2097152, 00:06:24.596 "send_buf_size": 2097152, 00:06:24.596 "enable_recv_pipe": true, 00:06:24.596 "enable_quickack": false, 00:06:24.596 "enable_placement_id": 0, 00:06:24.596 "enable_zerocopy_send_server": true, 00:06:24.596 "enable_zerocopy_send_client": false, 00:06:24.596 "zerocopy_threshold": 0, 00:06:24.596 "tls_version": 0, 00:06:24.596 "enable_ktls": false 00:06:24.596 } 00:06:24.596 } 00:06:24.596 ] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "vmd", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "accel", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "accel_set_options", 00:06:24.596 "params": { 00:06:24.596 "small_cache_size": 128, 00:06:24.596 "large_cache_size": 16, 00:06:24.596 "task_count": 2048, 00:06:24.596 "sequence_count": 2048, 00:06:24.596 "buf_count": 2048 00:06:24.596 } 00:06:24.596 } 00:06:24.596 ] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "bdev", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "bdev_set_options", 00:06:24.596 "params": { 00:06:24.596 "bdev_io_pool_size": 65535, 00:06:24.596 "bdev_io_cache_size": 256, 00:06:24.596 "bdev_auto_examine": true, 00:06:24.596 "iobuf_small_cache_size": 128, 00:06:24.596 "iobuf_large_cache_size": 16 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "bdev_raid_set_options", 00:06:24.596 "params": { 00:06:24.596 "process_window_size_kb": 1024 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "bdev_iscsi_set_options", 00:06:24.596 "params": { 00:06:24.596 "timeout_sec": 30 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "bdev_nvme_set_options", 00:06:24.596 "params": { 00:06:24.596 "action_on_timeout": "none", 00:06:24.596 "timeout_us": 0, 00:06:24.596 "timeout_admin_us": 0, 00:06:24.596 "keep_alive_timeout_ms": 10000, 00:06:24.596 "arbitration_burst": 0, 00:06:24.596 "low_priority_weight": 0, 00:06:24.596 "medium_priority_weight": 0, 00:06:24.596 "high_priority_weight": 0, 00:06:24.596 "nvme_adminq_poll_period_us": 10000, 00:06:24.596 "nvme_ioq_poll_period_us": 0, 00:06:24.596 "io_queue_requests": 0, 00:06:24.596 "delay_cmd_submit": true, 00:06:24.596 "transport_retry_count": 4, 00:06:24.596 "bdev_retry_count": 3, 00:06:24.596 "transport_ack_timeout": 0, 00:06:24.596 "ctrlr_loss_timeout_sec": 0, 00:06:24.596 "reconnect_delay_sec": 0, 00:06:24.596 "fast_io_fail_timeout_sec": 0, 00:06:24.596 "disable_auto_failback": false, 00:06:24.596 "generate_uuids": false, 00:06:24.596 "transport_tos": 0, 00:06:24.596 "nvme_error_stat": false, 00:06:24.596 "rdma_srq_size": 0, 00:06:24.596 "io_path_stat": false, 00:06:24.596 "allow_accel_sequence": false, 00:06:24.596 "rdma_max_cq_size": 0, 00:06:24.596 "rdma_cm_event_timeout_ms": 0, 00:06:24.596 "dhchap_digests": [ 00:06:24.596 "sha256", 00:06:24.596 "sha384", 00:06:24.596 "sha512" 00:06:24.596 ], 00:06:24.596 "dhchap_dhgroups": [ 00:06:24.596 "null", 00:06:24.596 "ffdhe2048", 00:06:24.596 "ffdhe3072", 00:06:24.596 "ffdhe4096", 00:06:24.596 "ffdhe6144", 00:06:24.596 "ffdhe8192" 00:06:24.596 ] 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "bdev_nvme_set_hotplug", 00:06:24.596 "params": { 00:06:24.596 "period_us": 100000, 00:06:24.596 "enable": false 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": 
"bdev_wait_for_examine" 00:06:24.596 } 00:06:24.596 ] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "scsi", 00:06:24.596 "config": null 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "scheduler", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "framework_set_scheduler", 00:06:24.596 "params": { 00:06:24.596 "name": "static" 00:06:24.596 } 00:06:24.596 } 00:06:24.596 ] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "vhost_scsi", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "vhost_blk", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "ublk", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "nbd", 00:06:24.596 "config": [] 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "subsystem": "nvmf", 00:06:24.596 "config": [ 00:06:24.596 { 00:06:24.596 "method": "nvmf_set_config", 00:06:24.596 "params": { 00:06:24.596 "discovery_filter": "match_any", 00:06:24.596 "admin_cmd_passthru": { 00:06:24.596 "identify_ctrlr": false 00:06:24.596 } 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "nvmf_set_max_subsystems", 00:06:24.596 "params": { 00:06:24.596 "max_subsystems": 1024 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "nvmf_set_crdt", 00:06:24.596 "params": { 00:06:24.596 "crdt1": 0, 00:06:24.596 "crdt2": 0, 00:06:24.596 "crdt3": 0 00:06:24.596 } 00:06:24.596 }, 00:06:24.596 { 00:06:24.596 "method": "nvmf_create_transport", 00:06:24.596 "params": { 00:06:24.596 "trtype": "TCP", 00:06:24.596 "max_queue_depth": 128, 00:06:24.596 "max_io_qpairs_per_ctrlr": 127, 00:06:24.596 "in_capsule_data_size": 4096, 00:06:24.596 "max_io_size": 131072, 00:06:24.596 "io_unit_size": 131072, 00:06:24.596 "max_aq_depth": 128, 00:06:24.596 "num_shared_buffers": 511, 00:06:24.596 "buf_cache_size": 4294967295, 00:06:24.596 "dif_insert_or_strip": false, 00:06:24.596 "zcopy": false, 00:06:24.596 "c2h_success": true, 00:06:24.596 "sock_priority": 0, 00:06:24.596 "abort_timeout_sec": 1, 00:06:24.596 "ack_timeout": 0, 00:06:24.596 "data_wr_pool_size": 0 00:06:24.596 } 00:06:24.597 } 00:06:24.597 ] 00:06:24.597 }, 00:06:24.597 { 00:06:24.597 "subsystem": "iscsi", 00:06:24.597 "config": [ 00:06:24.597 { 00:06:24.597 "method": "iscsi_set_options", 00:06:24.597 "params": { 00:06:24.597 "node_base": "iqn.2016-06.io.spdk", 00:06:24.597 "max_sessions": 128, 00:06:24.597 "max_connections_per_session": 2, 00:06:24.597 "max_queue_depth": 64, 00:06:24.597 "default_time2wait": 2, 00:06:24.597 "default_time2retain": 20, 00:06:24.597 "first_burst_length": 8192, 00:06:24.597 "immediate_data": true, 00:06:24.597 "allow_duplicated_isid": false, 00:06:24.597 "error_recovery_level": 0, 00:06:24.597 "nop_timeout": 60, 00:06:24.597 "nop_in_interval": 30, 00:06:24.597 "disable_chap": false, 00:06:24.597 "require_chap": false, 00:06:24.597 "mutual_chap": false, 00:06:24.597 "chap_group": 0, 00:06:24.597 "max_large_datain_per_connection": 64, 00:06:24.597 "max_r2t_per_connection": 4, 00:06:24.597 "pdu_pool_size": 36864, 00:06:24.597 "immediate_data_pool_size": 16384, 00:06:24.597 "data_out_pool_size": 2048 00:06:24.597 } 00:06:24.597 } 00:06:24.597 ] 00:06:24.597 } 00:06:24.597 ] 00:06:24.597 } 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 75917 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@948 -- # '[' -z 75917 ']' 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 75917 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75917 00:06:24.597 killing process with pid 75917 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75917' 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 75917 00:06:24.597 17:12:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 75917 00:06:25.163 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=75945 00:06:25.163 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:25.163 17:12:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 75945 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 75945 ']' 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 75945 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75945 00:06:30.493 killing process with pid 75945 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75945' 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 75945 00:06:30.493 17:12:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 75945 00:06:30.493 17:12:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:30.493 17:12:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:30.493 ************************************ 00:06:30.493 END TEST skip_rpc_with_json 00:06:30.493 ************************************ 00:06:30.493 00:06:30.493 real 0m7.302s 00:06:30.493 user 0m6.648s 00:06:30.493 sys 0m0.962s 00:06:30.493 17:12:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.493 17:12:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.751 17:12:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay 
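The second spdk_tgt launched just above is the actual verification step: it runs with --no-rpc-server and --json pointing at the saved config.json, its output is captured in test/rpc/log.txt (the LOG_PATH defined earlier), and after the sleep the test greps that log for 'TCP Transport Init' to prove the transport was recreated purely from the JSON file. Roughly:

# Sketch: replay the saved config with no RPC server and check the transport comes back.
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
tgt_pid=$!
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from config.json"
kill "$tgt_pid"; wait "$tgt_pid" || true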
test_skip_rpc_with_delay 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.751 ************************************ 00:06:30.751 START TEST skip_rpc_with_delay 00:06:30.751 ************************************ 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.751 [2024-07-15 17:12:41.498009] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
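skip_rpc_with_delay needs nothing more than the error printed above: --wait-for-rpc asks the app to hold initialization until an RPC tells it to continue, which is meaningless when --no-rpc-server is also given, so spdk_tgt refuses to start and the test counts that refusal as a pass. As a sketch:

# Sketch: this flag combination must fail
# ("Cannot use '--wait-for-rpc' if no RPC server is going to be started.").
if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt started with contradictory flags" >&2
    exit 1
fi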
00:06:30.751 [2024-07-15 17:12:41.498212] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.751 00:06:30.751 real 0m0.180s 00:06:30.751 user 0m0.091s 00:06:30.751 sys 0m0.088s 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.751 ************************************ 00:06:30.751 END TEST skip_rpc_with_delay 00:06:30.751 ************************************ 00:06:30.751 17:12:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:30.751 17:12:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:30.751 17:12:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:30.751 17:12:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.751 17:12:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.010 ************************************ 00:06:31.010 START TEST exit_on_failed_rpc_init 00:06:31.010 ************************************ 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=76059 00:06:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 76059 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 76059 ']' 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.010 17:12:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.010 [2024-07-15 17:12:41.735742] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:06:31.010 [2024-07-15 17:12:41.735952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76059 ] 00:06:31.268 [2024-07-15 17:12:41.890781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.268 [2024-07-15 17:12:41.911635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.268 [2024-07-15 17:12:42.013145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:32.216 17:12:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:32.216 [2024-07-15 17:12:42.812479] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:32.216 [2024-07-15 17:12:42.812685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76077 ] 00:06:32.216 [2024-07-15 17:12:42.959448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
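exit_on_failed_rpc_init follows the same must-fail pattern: the first target (pid 76059, core mask 0x1) already owns /var/tmp/spdk.sock, so the second instance started above on core mask 0x2 cannot bind its RPC socket, prints the 'in use' error, and exits non-zero, which the lines below assert before the first target is killed. A rough equivalent:

# Sketch: two targets, one default socket; the second must fail RPC init.
build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 5                                     # stand-in for the test's waitforlisten helper
if build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target brought up its RPC server" >&2
    exit 1
fi
kill "$first_pid"; wait "$first_pid" || true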
00:06:32.216 [2024-07-15 17:12:42.984822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.474 [2024-07-15 17:12:43.089761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.474 [2024-07-15 17:12:43.089930] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:32.474 [2024-07-15 17:12:43.089974] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:32.474 [2024-07-15 17:12:43.089998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 76059 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 76059 ']' 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 76059 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76059 00:06:32.474 killing process with pid 76059 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76059' 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 76059 00:06:32.474 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 76059 00:06:33.040 ************************************ 00:06:33.040 END TEST exit_on_failed_rpc_init 00:06:33.040 ************************************ 00:06:33.040 00:06:33.040 real 0m2.099s 00:06:33.040 user 0m2.390s 00:06:33.040 sys 0m0.613s 00:06:33.040 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.040 17:12:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.040 17:12:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.040 17:12:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.040 ************************************ 00:06:33.041 END TEST skip_rpc 00:06:33.041 ************************************ 00:06:33.041 00:06:33.041 real 0m15.571s 00:06:33.041 user 0m14.439s 00:06:33.041 sys 0m2.219s 00:06:33.041 17:12:43 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 
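The exit_on_failed_rpc_init run above boils down to this: one target owns /var/tmp/spdk.sock, and a second target pointed at the same socket must fail RPC initialization and exit non-zero. A rough stand-alone reproduction of that scenario (binary path as printed in the log; the sleep is a crude stand-in for the waitforlisten polling):

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 &                      # first target claims /var/tmp/spdk.sock
first=$!
sleep 1                              # crude stand-in for waitforlisten
if "$bin" -m 0x2; then               # second target must NOT come up
    echo "unexpected success: second target started" >&2
fi
kill -SIGINT "$first" && wait "$first"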
00:06:33.041 17:12:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.041 17:12:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.041 17:12:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:33.041 17:12:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.041 17:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.041 17:12:43 -- common/autotest_common.sh@10 -- # set +x 00:06:33.041 ************************************ 00:06:33.041 START TEST rpc_client 00:06:33.041 ************************************ 00:06:33.041 17:12:43 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:33.041 * Looking for test storage... 00:06:33.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:33.041 17:12:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:33.298 OK 00:06:33.298 17:12:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:33.298 00:06:33.298 real 0m0.133s 00:06:33.298 user 0m0.062s 00:06:33.298 sys 0m0.074s 00:06:33.298 17:12:43 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.298 17:12:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:33.298 ************************************ 00:06:33.298 END TEST rpc_client 00:06:33.298 ************************************ 00:06:33.298 17:12:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.298 17:12:43 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:33.298 17:12:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.298 17:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.298 17:12:43 -- common/autotest_common.sh@10 -- # set +x 00:06:33.298 ************************************ 00:06:33.298 START TEST json_config 00:06:33.298 ************************************ 00:06:33.298 17:12:43 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:33.298 17:12:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.298 17:12:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bf447e3-99af-44cc-8bbd-f884be8c0416 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7bf447e3-99af-44cc-8bbd-f884be8c0416 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.299 17:12:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.299 17:12:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.299 17:12:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.299 17:12:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.299 17:12:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.299 17:12:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.299 17:12:44 json_config -- paths/export.sh@5 -- # export PATH 00:06:33.299 17:12:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@47 -- # : 0 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.299 17:12:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:33.299 17:12:44 json_config -- 
json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:33.299 WARNING: No tests are enabled so not running JSON configuration tests 00:06:33.299 17:12:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:33.299 00:06:33.299 real 0m0.079s 00:06:33.299 user 0m0.034s 00:06:33.299 sys 0m0.043s 00:06:33.299 ************************************ 00:06:33.299 END TEST json_config 00:06:33.299 ************************************ 00:06:33.299 17:12:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.299 17:12:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 17:12:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.299 17:12:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:33.299 17:12:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.299 17:12:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.299 17:12:44 -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 ************************************ 00:06:33.299 START TEST json_config_extra_key 00:06:33.299 ************************************ 00:06:33.299 17:12:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:33.557 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.557 17:12:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bf447e3-99af-44cc-8bbd-f884be8c0416 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7bf447e3-99af-44cc-8bbd-f884be8c0416 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.558 17:12:44 
json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.558 17:12:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.558 17:12:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.558 17:12:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.558 17:12:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.558 17:12:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.558 17:12:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.558 17:12:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:33.558 17:12:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.558 17:12:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.558 INFO: launching applications... 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:33.558 17:12:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:33.558 Waiting for target to run... 00:06:33.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=76236 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 76236 /var/tmp/spdk_tgt.sock 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 76236 ']' 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
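The json_config_extra_key launch above drives spdk_tgt entirely from a JSON file on a dedicated RPC socket. The contents of extra_key.json are not shown in this log, so the file below is a made-up minimal stand-in, included only to illustrate the shape of such a config:

cat > /tmp/extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } }
      ]
    }
  ]
}
EOF
# runs until interrupted; the harness backgrounds it and waits on /var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key.json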
00:06:33.558 17:12:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.558 17:12:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.558 [2024-07-15 17:12:44.299511] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:33.558 [2024-07-15 17:12:44.299723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76236 ] 00:06:34.124 [2024-07-15 17:12:44.751690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.124 [2024-07-15 17:12:44.775732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.124 [2024-07-15 17:12:44.852117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.381 00:06:34.381 INFO: shutting down applications... 00:06:34.381 17:12:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.381 17:12:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:34.381 17:12:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:34.381 17:12:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 76236 ]] 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 76236 00:06:34.381 17:12:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.382 17:12:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.382 17:12:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 76236 00:06:34.382 17:12:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 76236 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.948 17:12:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.948 SPDK target shutdown done 00:06:34.948 17:12:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:34.948 Success 00:06:34.948 00:06:34.948 real 0m1.601s 00:06:34.948 user 0m1.422s 00:06:34.948 sys 0m0.535s 00:06:34.948 17:12:45 json_config_extra_key -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.948 17:12:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.948 ************************************ 00:06:34.948 END TEST json_config_extra_key 00:06:34.948 ************************************ 00:06:34.948 17:12:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.948 17:12:45 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.948 17:12:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.948 17:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.948 17:12:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.948 ************************************ 00:06:34.948 START TEST alias_rpc 00:06:34.948 ************************************ 00:06:34.948 17:12:45 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.205 * Looking for test storage... 00:06:35.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:35.206 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.206 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=76301 00:06:35.206 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 76301 00:06:35.206 17:12:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 76301 ']' 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.206 17:12:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.206 [2024-07-15 17:12:45.952589] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:35.206 [2024-07-15 17:12:45.953077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ] 00:06:35.463 [2024-07-15 17:12:46.107552] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
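The json_config_extra_key teardown traced just above is a SIGINT-then-poll shutdown (json_config/common.sh lines 38-45 in the trace). The same loop, condensed, assuming app_pid holds the target's PID:

kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
    sleep 0.5
done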
00:06:35.463 [2024-07-15 17:12:46.129820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.463 [2024-07-15 17:12:46.235526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.400 17:12:46 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.400 17:12:46 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.400 17:12:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:36.400 17:12:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 76301 00:06:36.400 17:12:47 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 76301 ']' 00:06:36.400 17:12:47 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 76301 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76301 00:06:36.401 killing process with pid 76301 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76301' 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@967 -- # kill 76301 00:06:36.401 17:12:47 alias_rpc -- common/autotest_common.sh@972 -- # wait 76301 00:06:36.975 ************************************ 00:06:36.975 END TEST alias_rpc 00:06:36.975 ************************************ 00:06:36.975 00:06:36.975 real 0m1.885s 00:06:36.975 user 0m2.043s 00:06:36.975 sys 0m0.538s 00:06:36.975 17:12:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.975 17:12:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.975 17:12:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.975 17:12:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:36.975 17:12:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:36.975 17:12:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.975 17:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.975 17:12:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.975 ************************************ 00:06:36.975 START TEST spdkcli_tcp 00:06:36.975 ************************************ 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:36.975 * Looking for test storage... 
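The killprocess 76301 sequence above guards the kill with sanity checks (the PID is non-empty, the process still exists, and its comm is an SPDK reactor rather than sudo) before signalling and reaping it. A simplified sketch of that pattern, not the actual autotest_common.sh helper:

killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" != sudo ]] || return 1             # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
}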
00:06:36.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=76378 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 76378 00:06:36.975 17:12:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 76378 ']' 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.975 17:12:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.233 [2024-07-15 17:12:47.919172] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:37.233 [2024-07-15 17:12:47.919353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76378 ] 00:06:37.233 [2024-07-15 17:12:48.063841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
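The spdkcli_tcp run that follows never talks to the UNIX socket directly: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP side. Reproducing that bridge outside the harness looks roughly like this (flags copied from the trace below):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"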
00:06:37.233 [2024-07-15 17:12:48.082410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.491 [2024-07-15 17:12:48.183227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.491 [2024-07-15 17:12:48.183277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.056 17:12:48 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.056 17:12:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:38.056 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=76395 00:06:38.056 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:38.056 17:12:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:38.622 [ 00:06:38.622 "bdev_malloc_delete", 00:06:38.622 "bdev_malloc_create", 00:06:38.622 "bdev_null_resize", 00:06:38.623 "bdev_null_delete", 00:06:38.623 "bdev_null_create", 00:06:38.623 "bdev_nvme_cuse_unregister", 00:06:38.623 "bdev_nvme_cuse_register", 00:06:38.623 "bdev_opal_new_user", 00:06:38.623 "bdev_opal_set_lock_state", 00:06:38.623 "bdev_opal_delete", 00:06:38.623 "bdev_opal_get_info", 00:06:38.623 "bdev_opal_create", 00:06:38.623 "bdev_nvme_opal_revert", 00:06:38.623 "bdev_nvme_opal_init", 00:06:38.623 "bdev_nvme_send_cmd", 00:06:38.623 "bdev_nvme_get_path_iostat", 00:06:38.623 "bdev_nvme_get_mdns_discovery_info", 00:06:38.623 "bdev_nvme_stop_mdns_discovery", 00:06:38.623 "bdev_nvme_start_mdns_discovery", 00:06:38.623 "bdev_nvme_set_multipath_policy", 00:06:38.623 "bdev_nvme_set_preferred_path", 00:06:38.623 "bdev_nvme_get_io_paths", 00:06:38.623 "bdev_nvme_remove_error_injection", 00:06:38.623 "bdev_nvme_add_error_injection", 00:06:38.623 "bdev_nvme_get_discovery_info", 00:06:38.623 "bdev_nvme_stop_discovery", 00:06:38.623 "bdev_nvme_start_discovery", 00:06:38.623 "bdev_nvme_get_controller_health_info", 00:06:38.623 "bdev_nvme_disable_controller", 00:06:38.623 "bdev_nvme_enable_controller", 00:06:38.623 "bdev_nvme_reset_controller", 00:06:38.623 "bdev_nvme_get_transport_statistics", 00:06:38.623 "bdev_nvme_apply_firmware", 00:06:38.623 "bdev_nvme_detach_controller", 00:06:38.623 "bdev_nvme_get_controllers", 00:06:38.623 "bdev_nvme_attach_controller", 00:06:38.623 "bdev_nvme_set_hotplug", 00:06:38.623 "bdev_nvme_set_options", 00:06:38.623 "bdev_passthru_delete", 00:06:38.623 "bdev_passthru_create", 00:06:38.623 "bdev_lvol_set_parent_bdev", 00:06:38.623 "bdev_lvol_set_parent", 00:06:38.623 "bdev_lvol_check_shallow_copy", 00:06:38.623 "bdev_lvol_start_shallow_copy", 00:06:38.623 "bdev_lvol_grow_lvstore", 00:06:38.623 "bdev_lvol_get_lvols", 00:06:38.623 "bdev_lvol_get_lvstores", 00:06:38.623 "bdev_lvol_delete", 00:06:38.623 "bdev_lvol_set_read_only", 00:06:38.623 "bdev_lvol_resize", 00:06:38.623 "bdev_lvol_decouple_parent", 00:06:38.623 "bdev_lvol_inflate", 00:06:38.623 "bdev_lvol_rename", 00:06:38.623 "bdev_lvol_clone_bdev", 00:06:38.623 "bdev_lvol_clone", 00:06:38.623 "bdev_lvol_snapshot", 00:06:38.623 "bdev_lvol_create", 00:06:38.623 "bdev_lvol_delete_lvstore", 00:06:38.623 "bdev_lvol_rename_lvstore", 00:06:38.623 "bdev_lvol_create_lvstore", 00:06:38.623 "bdev_raid_set_options", 00:06:38.623 "bdev_raid_remove_base_bdev", 00:06:38.623 "bdev_raid_add_base_bdev", 00:06:38.623 "bdev_raid_delete", 00:06:38.623 "bdev_raid_create", 00:06:38.623 "bdev_raid_get_bdevs", 00:06:38.623 "bdev_error_inject_error", 00:06:38.623 "bdev_error_delete", 00:06:38.623 
"bdev_error_create", 00:06:38.623 "bdev_split_delete", 00:06:38.623 "bdev_split_create", 00:06:38.623 "bdev_delay_delete", 00:06:38.623 "bdev_delay_create", 00:06:38.623 "bdev_delay_update_latency", 00:06:38.623 "bdev_zone_block_delete", 00:06:38.623 "bdev_zone_block_create", 00:06:38.623 "blobfs_create", 00:06:38.623 "blobfs_detect", 00:06:38.623 "blobfs_set_cache_size", 00:06:38.623 "bdev_xnvme_delete", 00:06:38.623 "bdev_xnvme_create", 00:06:38.623 "bdev_aio_delete", 00:06:38.623 "bdev_aio_rescan", 00:06:38.623 "bdev_aio_create", 00:06:38.623 "bdev_ftl_set_property", 00:06:38.623 "bdev_ftl_get_properties", 00:06:38.623 "bdev_ftl_get_stats", 00:06:38.623 "bdev_ftl_unmap", 00:06:38.623 "bdev_ftl_unload", 00:06:38.623 "bdev_ftl_delete", 00:06:38.623 "bdev_ftl_load", 00:06:38.623 "bdev_ftl_create", 00:06:38.623 "bdev_virtio_attach_controller", 00:06:38.623 "bdev_virtio_scsi_get_devices", 00:06:38.623 "bdev_virtio_detach_controller", 00:06:38.623 "bdev_virtio_blk_set_hotplug", 00:06:38.623 "bdev_iscsi_delete", 00:06:38.623 "bdev_iscsi_create", 00:06:38.623 "bdev_iscsi_set_options", 00:06:38.623 "accel_error_inject_error", 00:06:38.623 "ioat_scan_accel_module", 00:06:38.623 "dsa_scan_accel_module", 00:06:38.623 "iaa_scan_accel_module", 00:06:38.623 "keyring_file_remove_key", 00:06:38.623 "keyring_file_add_key", 00:06:38.623 "keyring_linux_set_options", 00:06:38.623 "iscsi_get_histogram", 00:06:38.623 "iscsi_enable_histogram", 00:06:38.623 "iscsi_set_options", 00:06:38.623 "iscsi_get_auth_groups", 00:06:38.623 "iscsi_auth_group_remove_secret", 00:06:38.623 "iscsi_auth_group_add_secret", 00:06:38.623 "iscsi_delete_auth_group", 00:06:38.623 "iscsi_create_auth_group", 00:06:38.623 "iscsi_set_discovery_auth", 00:06:38.623 "iscsi_get_options", 00:06:38.623 "iscsi_target_node_request_logout", 00:06:38.623 "iscsi_target_node_set_redirect", 00:06:38.623 "iscsi_target_node_set_auth", 00:06:38.623 "iscsi_target_node_add_lun", 00:06:38.623 "iscsi_get_stats", 00:06:38.623 "iscsi_get_connections", 00:06:38.623 "iscsi_portal_group_set_auth", 00:06:38.623 "iscsi_start_portal_group", 00:06:38.623 "iscsi_delete_portal_group", 00:06:38.623 "iscsi_create_portal_group", 00:06:38.623 "iscsi_get_portal_groups", 00:06:38.623 "iscsi_delete_target_node", 00:06:38.623 "iscsi_target_node_remove_pg_ig_maps", 00:06:38.623 "iscsi_target_node_add_pg_ig_maps", 00:06:38.623 "iscsi_create_target_node", 00:06:38.623 "iscsi_get_target_nodes", 00:06:38.623 "iscsi_delete_initiator_group", 00:06:38.623 "iscsi_initiator_group_remove_initiators", 00:06:38.623 "iscsi_initiator_group_add_initiators", 00:06:38.623 "iscsi_create_initiator_group", 00:06:38.623 "iscsi_get_initiator_groups", 00:06:38.623 "nvmf_set_crdt", 00:06:38.623 "nvmf_set_config", 00:06:38.623 "nvmf_set_max_subsystems", 00:06:38.623 "nvmf_stop_mdns_prr", 00:06:38.623 "nvmf_publish_mdns_prr", 00:06:38.623 "nvmf_subsystem_get_listeners", 00:06:38.623 "nvmf_subsystem_get_qpairs", 00:06:38.624 "nvmf_subsystem_get_controllers", 00:06:38.624 "nvmf_get_stats", 00:06:38.624 "nvmf_get_transports", 00:06:38.624 "nvmf_create_transport", 00:06:38.624 "nvmf_get_targets", 00:06:38.624 "nvmf_delete_target", 00:06:38.624 "nvmf_create_target", 00:06:38.624 "nvmf_subsystem_allow_any_host", 00:06:38.624 "nvmf_subsystem_remove_host", 00:06:38.624 "nvmf_subsystem_add_host", 00:06:38.624 "nvmf_ns_remove_host", 00:06:38.624 "nvmf_ns_add_host", 00:06:38.624 "nvmf_subsystem_remove_ns", 00:06:38.624 "nvmf_subsystem_add_ns", 00:06:38.624 "nvmf_subsystem_listener_set_ana_state", 00:06:38.624 
"nvmf_discovery_get_referrals", 00:06:38.624 "nvmf_discovery_remove_referral", 00:06:38.624 "nvmf_discovery_add_referral", 00:06:38.624 "nvmf_subsystem_remove_listener", 00:06:38.624 "nvmf_subsystem_add_listener", 00:06:38.624 "nvmf_delete_subsystem", 00:06:38.624 "nvmf_create_subsystem", 00:06:38.624 "nvmf_get_subsystems", 00:06:38.624 "env_dpdk_get_mem_stats", 00:06:38.624 "nbd_get_disks", 00:06:38.624 "nbd_stop_disk", 00:06:38.624 "nbd_start_disk", 00:06:38.624 "ublk_recover_disk", 00:06:38.624 "ublk_get_disks", 00:06:38.624 "ublk_stop_disk", 00:06:38.624 "ublk_start_disk", 00:06:38.624 "ublk_destroy_target", 00:06:38.624 "ublk_create_target", 00:06:38.624 "virtio_blk_create_transport", 00:06:38.624 "virtio_blk_get_transports", 00:06:38.624 "vhost_controller_set_coalescing", 00:06:38.624 "vhost_get_controllers", 00:06:38.624 "vhost_delete_controller", 00:06:38.624 "vhost_create_blk_controller", 00:06:38.624 "vhost_scsi_controller_remove_target", 00:06:38.624 "vhost_scsi_controller_add_target", 00:06:38.624 "vhost_start_scsi_controller", 00:06:38.624 "vhost_create_scsi_controller", 00:06:38.624 "thread_set_cpumask", 00:06:38.624 "framework_get_governor", 00:06:38.624 "framework_get_scheduler", 00:06:38.624 "framework_set_scheduler", 00:06:38.624 "framework_get_reactors", 00:06:38.624 "thread_get_io_channels", 00:06:38.624 "thread_get_pollers", 00:06:38.624 "thread_get_stats", 00:06:38.624 "framework_monitor_context_switch", 00:06:38.624 "spdk_kill_instance", 00:06:38.624 "log_enable_timestamps", 00:06:38.624 "log_get_flags", 00:06:38.624 "log_clear_flag", 00:06:38.624 "log_set_flag", 00:06:38.624 "log_get_level", 00:06:38.624 "log_set_level", 00:06:38.624 "log_get_print_level", 00:06:38.624 "log_set_print_level", 00:06:38.624 "framework_enable_cpumask_locks", 00:06:38.624 "framework_disable_cpumask_locks", 00:06:38.624 "framework_wait_init", 00:06:38.624 "framework_start_init", 00:06:38.624 "scsi_get_devices", 00:06:38.624 "bdev_get_histogram", 00:06:38.624 "bdev_enable_histogram", 00:06:38.624 "bdev_set_qos_limit", 00:06:38.624 "bdev_set_qd_sampling_period", 00:06:38.624 "bdev_get_bdevs", 00:06:38.624 "bdev_reset_iostat", 00:06:38.624 "bdev_get_iostat", 00:06:38.624 "bdev_examine", 00:06:38.624 "bdev_wait_for_examine", 00:06:38.624 "bdev_set_options", 00:06:38.624 "notify_get_notifications", 00:06:38.624 "notify_get_types", 00:06:38.624 "accel_get_stats", 00:06:38.624 "accel_set_options", 00:06:38.624 "accel_set_driver", 00:06:38.624 "accel_crypto_key_destroy", 00:06:38.624 "accel_crypto_keys_get", 00:06:38.624 "accel_crypto_key_create", 00:06:38.624 "accel_assign_opc", 00:06:38.624 "accel_get_module_info", 00:06:38.624 "accel_get_opc_assignments", 00:06:38.624 "vmd_rescan", 00:06:38.624 "vmd_remove_device", 00:06:38.624 "vmd_enable", 00:06:38.624 "sock_get_default_impl", 00:06:38.624 "sock_set_default_impl", 00:06:38.624 "sock_impl_set_options", 00:06:38.624 "sock_impl_get_options", 00:06:38.624 "iobuf_get_stats", 00:06:38.624 "iobuf_set_options", 00:06:38.624 "framework_get_pci_devices", 00:06:38.624 "framework_get_config", 00:06:38.624 "framework_get_subsystems", 00:06:38.624 "trace_get_info", 00:06:38.624 "trace_get_tpoint_group_mask", 00:06:38.624 "trace_disable_tpoint_group", 00:06:38.624 "trace_enable_tpoint_group", 00:06:38.624 "trace_clear_tpoint_mask", 00:06:38.624 "trace_set_tpoint_mask", 00:06:38.624 "keyring_get_keys", 00:06:38.624 "spdk_get_version", 00:06:38.624 "rpc_get_methods" 00:06:38.624 ] 00:06:38.624 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit 
run_spdk_tgt_tcp 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.624 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:38.624 17:12:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 76378 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 76378 ']' 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 76378 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76378 00:06:38.624 killing process with pid 76378 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76378' 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 76378 00:06:38.624 17:12:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 76378 00:06:38.883 ************************************ 00:06:38.883 END TEST spdkcli_tcp 00:06:38.883 ************************************ 00:06:38.883 00:06:38.883 real 0m2.020s 00:06:38.883 user 0m3.659s 00:06:38.883 sys 0m0.590s 00:06:38.883 17:12:49 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.883 17:12:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.140 17:12:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.140 17:12:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.140 17:12:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.140 17:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.140 17:12:49 -- common/autotest_common.sh@10 -- # set +x 00:06:39.140 ************************************ 00:06:39.140 START TEST dpdk_mem_utility 00:06:39.140 ************************************ 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.140 * Looking for test storage... 00:06:39.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:39.140 17:12:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:39.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
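The dpdk_mem_utility test starting here is a two-step flow: ask the running target to dump its DPDK memory statistics over RPC, then post-process the dump with scripts/dpdk_mem_info.py. Outside the harness the same flow is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
"$rpc" env_dpdk_get_mem_stats        # target writes /tmp/spdk_mem_dump.txt (see below)
"$mem_script"                        # heap / mempool / memzone summary
"$mem_script" -m 0                   # per-heap element detail for heap id 0, as dumped below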
00:06:39.140 17:12:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=76470 00:06:39.140 17:12:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 76470 00:06:39.140 17:12:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 76470 ']' 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.140 17:12:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.140 [2024-07-15 17:12:49.943278] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:39.140 [2024-07-15 17:12:49.943493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76470 ] 00:06:39.397 [2024-07-15 17:12:50.087115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.397 [2024-07-15 17:12:50.107511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.397 [2024-07-15 17:12:50.207338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.330 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.330 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:40.330 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.330 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.330 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.330 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.330 { 00:06:40.330 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.330 } 00:06:40.330 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.330 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:40.330 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:40.330 1 heaps totaling size 814.000000 MiB 00:06:40.330 size: 814.000000 MiB heap id: 0 00:06:40.330 end heaps---------- 00:06:40.330 8 mempools totaling size 598.116089 MiB 00:06:40.330 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.330 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.330 size: 84.521057 MiB name: bdev_io_76470 00:06:40.330 size: 51.011292 MiB name: evtpool_76470 00:06:40.330 size: 50.003479 MiB name: msgpool_76470 00:06:40.330 size: 21.763794 MiB name: PDU_Pool 00:06:40.330 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.330 size: 0.026123 MiB name: Session_Pool 00:06:40.330 end mempools------- 00:06:40.330 6 memzones totaling size 4.142822 MiB 00:06:40.330 size: 
1.000366 MiB name: RG_ring_0_76470 00:06:40.330 size: 1.000366 MiB name: RG_ring_1_76470 00:06:40.330 size: 1.000366 MiB name: RG_ring_4_76470 00:06:40.330 size: 1.000366 MiB name: RG_ring_5_76470 00:06:40.330 size: 0.125366 MiB name: RG_ring_2_76470 00:06:40.330 size: 0.015991 MiB name: RG_ring_3_76470 00:06:40.330 end memzones------- 00:06:40.330 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.330 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:06:40.330 list of free elements. size: 12.471558 MiB 00:06:40.330 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:40.330 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:40.330 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:40.330 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:40.330 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:40.330 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:40.330 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:40.330 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:40.330 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:40.330 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:06:40.330 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:40.330 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:40.330 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:40.330 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:40.330 element at address: 0x200003a00000 with size: 0.348572 MiB 00:06:40.330 list of standard malloc elements. 
size: 199.265869 MiB 00:06:40.330 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:40.330 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:40.330 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:40.330 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:40.330 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:40.330 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:40.330 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:40.330 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:40.330 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:40.330 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:40.330 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:40.331 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:40.331 element at 
address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:40.331 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91f00 
with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa943c0 with size: 0.000183 MiB 
00:06:40.332 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:40.332 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:40.332 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:40.333 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:40.333 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:40.333 list of memzone associated elements. size: 602.262573 MiB 00:06:40.333 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:40.333 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.333 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:40.333 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.333 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:40.333 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_76470_0 00:06:40.333 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:40.333 associated memzone info: size: 48.002930 MiB name: MP_evtpool_76470_0 00:06:40.333 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:40.333 associated memzone info: size: 48.002930 MiB name: MP_msgpool_76470_0 00:06:40.333 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:40.333 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.333 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:40.333 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.333 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:40.333 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_76470 00:06:40.333 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:40.333 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_76470 00:06:40.333 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:40.333 associated memzone info: size: 1.007996 MiB name: MP_evtpool_76470 00:06:40.333 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:40.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.333 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:40.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.333 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:40.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.333 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:40.333 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.333 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:40.333 associated memzone info: size: 1.000366 MiB name: RG_ring_0_76470 00:06:40.333 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:40.333 associated memzone info: size: 1.000366 MiB name: RG_ring_1_76470 00:06:40.333 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:40.333 associated memzone info: size: 1.000366 MiB name: RG_ring_4_76470 00:06:40.333 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:40.333 associated memzone info: size: 1.000366 MiB name: RG_ring_5_76470 00:06:40.333 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:40.333 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_76470 00:06:40.334 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:06:40.334 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.334 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:40.334 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.334 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:40.334 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.334 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:40.334 associated memzone info: size: 0.125366 MiB name: RG_ring_2_76470 00:06:40.334 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:40.334 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.334 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:40.334 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.334 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:40.334 associated memzone info: size: 0.015991 MiB name: RG_ring_3_76470 00:06:40.334 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:40.334 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.334 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:40.334 associated memzone info: size: 0.000183 MiB name: MP_msgpool_76470 00:06:40.334 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:40.334 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_76470 00:06:40.334 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:40.334 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.334 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.334 17:12:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 76470 00:06:40.334 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 76470 ']' 00:06:40.334 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 76470 00:06:40.334 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:40.334 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.334 17:12:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76470 00:06:40.334 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.334 killing process with pid 76470 00:06:40.334 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.334 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76470' 00:06:40.334 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 76470 00:06:40.334 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 76470 00:06:40.897 00:06:40.897 real 0m1.701s 00:06:40.897 user 0m1.713s 00:06:40.897 sys 0m0.520s 00:06:40.897 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.897 ************************************ 00:06:40.897 17:12:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.897 END TEST dpdk_mem_utility 00:06:40.897 ************************************ 00:06:40.897 17:12:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.897 17:12:51 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:40.897 17:12:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.897 17:12:51 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.897 17:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:40.897 ************************************ 00:06:40.897 START TEST event 00:06:40.897 ************************************ 00:06:40.897 17:12:51 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:40.897 * Looking for test storage... 00:06:40.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:40.897 17:12:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.897 17:12:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.897 17:12:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:40.897 17:12:51 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:40.897 17:12:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.897 17:12:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.897 ************************************ 00:06:40.897 START TEST event_perf 00:06:40.897 ************************************ 00:06:40.897 17:12:51 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:40.897 Running I/O for 1 seconds...[2024-07-15 17:12:51.642211] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:40.897 [2024-07-15 17:12:51.642963] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76548 ] 00:06:41.155 [2024-07-15 17:12:51.787492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:41.156 [2024-07-15 17:12:51.803672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.156 [2024-07-15 17:12:51.906518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.156 [2024-07-15 17:12:51.906610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.156 [2024-07-15 17:12:51.906642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.156 Running I/O for 1 seconds...[2024-07-15 17:12:51.906676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.530 00:06:42.530 lcore 0: 186417 00:06:42.530 lcore 1: 186417 00:06:42.530 lcore 2: 186417 00:06:42.530 lcore 3: 186415 00:06:42.530 done. 
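The event_perf run above ends with one event counter per reactor lcore after a one-second run. A minimal sketch of invoking that same step by hand, outside the autotest wrapper; the SPDK_ROOT variable is an assumption for readability (the binary path, -m 0xF mask, and -t 1 duration are taken from the invocation in the transcript), and root privileges plus configured hugepages are assumed but not shown here:

SPDK_ROOT=/home/vagrant/spdk_repo/spdk   # assumed checkout/build location, matching the paths used in this log
# -m 0xF schedules reactors on lcores 0-3 (hence the four "lcore N:" counters), -t 1 runs for one second.
sudo "$SPDK_ROOT/test/event/event_perf/event_perf" -m 0xF -t 1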
00:06:42.530 00:06:42.530 real 0m1.407s 00:06:42.530 user 0m4.169s 00:06:42.530 sys 0m0.113s 00:06:42.530 17:12:53 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.530 17:12:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.530 ************************************ 00:06:42.530 END TEST event_perf 00:06:42.530 ************************************ 00:06:42.530 17:12:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.530 17:12:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:42.530 17:12:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.530 17:12:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.530 17:12:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.530 ************************************ 00:06:42.530 START TEST event_reactor 00:06:42.530 ************************************ 00:06:42.530 17:12:53 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:42.530 [2024-07-15 17:12:53.101162] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:42.530 [2024-07-15 17:12:53.101336] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76582 ] 00:06:42.530 [2024-07-15 17:12:53.243656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.530 [2024-07-15 17:12:53.265427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.530 [2024-07-15 17:12:53.369010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.899 test_start 00:06:43.899 oneshot 00:06:43.899 tick 100 00:06:43.899 tick 100 00:06:43.899 tick 250 00:06:43.899 tick 100 00:06:43.899 tick 100 00:06:43.899 tick 100 00:06:43.899 tick 250 00:06:43.899 tick 500 00:06:43.899 tick 100 00:06:43.899 tick 100 00:06:43.899 tick 250 00:06:43.899 tick 100 00:06:43.899 tick 100 00:06:43.899 test_end 00:06:43.899 00:06:43.899 real 0m1.399s 00:06:43.899 user 0m1.196s 00:06:43.899 sys 0m0.096s 00:06:43.899 17:12:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.899 ************************************ 00:06:43.899 END TEST event_reactor 00:06:43.899 17:12:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 ************************************ 00:06:43.899 17:12:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.899 17:12:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.899 17:12:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.899 17:12:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.899 17:12:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.899 ************************************ 00:06:43.899 START TEST event_reactor_perf 00:06:43.899 ************************************ 00:06:43.899 17:12:54 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.899 [2024-07-15 17:12:54.549167] Starting SPDK 
v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:43.899 [2024-07-15 17:12:54.549393] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76625 ] 00:06:43.899 [2024-07-15 17:12:54.701096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.899 [2024-07-15 17:12:54.721849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.156 [2024-07-15 17:12:54.821878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.087 test_start 00:06:45.087 test_end 00:06:45.087 Performance: 281775 events per second 00:06:45.087 ************************************ 00:06:45.087 END TEST event_reactor_perf 00:06:45.087 ************************************ 00:06:45.087 00:06:45.087 real 0m1.405s 00:06:45.087 user 0m1.192s 00:06:45.087 sys 0m0.105s 00:06:45.087 17:12:55 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.087 17:12:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.344 17:12:55 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.344 17:12:55 event -- event/event.sh@49 -- # uname -s 00:06:45.344 17:12:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.344 17:12:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.344 17:12:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.344 17:12:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.344 17:12:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.344 ************************************ 00:06:45.344 START TEST event_scheduler 00:06:45.344 ************************************ 00:06:45.344 17:12:55 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.344 * Looking for test storage... 00:06:45.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:45.344 17:12:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.344 17:12:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=76682 00:06:45.344 17:12:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.344 17:12:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 76682 00:06:45.344 17:12:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 76682 ']' 00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.344 17:12:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.600 [2024-07-15 17:12:56.201903] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:45.600 [2024-07-15 17:12:56.202246] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76682 ] 00:06:45.600 [2024-07-15 17:12:56.360297] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.600 [2024-07-15 17:12:56.378026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.857 [2024-07-15 17:12:56.478293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.858 [2024-07-15 17:12:56.478472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.858 [2024-07-15 17:12:56.478595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.858 [2024-07-15 17:12:56.478532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:46.441 17:12:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.441 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.441 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.441 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.441 POWER: Cannot set governor of lcore 0 to performance 00:06:46.441 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.441 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.441 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.441 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.441 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:46.441 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:46.441 POWER: Unable to set Power Management Environment for lcore 0 00:06:46.441 [2024-07-15 17:12:57.148706] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:46.441 [2024-07-15 17:12:57.148734] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:46.441 [2024-07-15 17:12:57.148770] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.441 [2024-07-15 17:12:57.148794] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.441 [2024-07-15 17:12:57.148809] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.441 [2024-07-15 17:12:57.148840] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.441 17:12:57 event.event_scheduler -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.441 17:12:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.441 [2024-07-15 17:12:57.241738] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.441 17:12:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.441 17:12:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.441 ************************************ 00:06:46.441 START TEST scheduler_create_thread 00:06:46.441 ************************************ 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.441 2 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.441 3 00:06:46.441 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.442 4 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.442 5 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.442 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 6 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 7 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 8 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 9 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 10 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.699 17:12:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.699 17:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.071 17:12:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.071 17:12:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.071 17:12:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.071 17:12:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.071 17:12:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.441 17:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.441 00:06:49.441 real 0m2.614s 00:06:49.441 user 0m0.017s 00:06:49.441 sys 0m0.003s 00:06:49.441 17:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.441 17:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.441 ************************************ 00:06:49.441 END TEST scheduler_create_thread 00:06:49.441 ************************************ 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:49.441 17:12:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.441 17:12:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 76682 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 76682 ']' 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 76682 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76682 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:49.441 killing process with pid 76682 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76682' 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 76682 00:06:49.441 17:12:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 76682 00:06:49.698 [2024-07-15 17:13:00.347229] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:49.955 ************************************ 00:06:49.955 END TEST event_scheduler 00:06:49.955 ************************************ 00:06:49.955 00:06:49.955 real 0m4.653s 00:06:49.955 user 0m8.457s 00:06:49.955 sys 0m0.480s 00:06:49.955 17:13:00 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.955 17:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.955 17:13:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:49.955 17:13:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:49.955 17:13:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:49.955 17:13:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.955 17:13:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.955 17:13:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.955 ************************************ 00:06:49.955 START TEST app_repeat 00:06:49.955 ************************************ 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=76788 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.955 Process app_repeat pid: 76788 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 76788' 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.955 spdk_app_start Round 0 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:49.955 17:13:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76788 /var/tmp/spdk-nbd.sock 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76788 ']' 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.955 17:13:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.955 [2024-07-15 17:13:00.739809] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:06:49.955 [2024-07-15 17:13:00.740011] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76788 ] 00:06:50.239 [2024-07-15 17:13:00.893012] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.239 [2024-07-15 17:13:00.916096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.239 [2024-07-15 17:13:01.018765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.239 [2024-07-15 17:13:01.018807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.179 17:13:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.179 17:13:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.179 17:13:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.179 Malloc0 00:06:51.179 17:13:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.435 Malloc1 00:06:51.435 17:13:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.435 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.436 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.436 17:13:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.693 /dev/nbd0 00:06:51.693 17:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.693 17:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.693 17:13:02 event.app_repeat -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.693 1+0 records in 00:06:51.693 1+0 records out 00:06:51.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270523 s, 15.1 MB/s 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.693 17:13:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.693 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.693 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.693 17:13:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.258 /dev/nbd1 00:06:52.258 17:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.258 17:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.258 17:13:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:52.258 17:13:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.258 17:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.259 1+0 records in 00:06:52.259 1+0 records out 00:06:52.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413789 s, 9.9 MB/s 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.259 17:13:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.259 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.259 17:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.259 17:13:02 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.259 17:13:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.259 17:13:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.259 17:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.259 { 00:06:52.259 "nbd_device": "/dev/nbd0", 00:06:52.259 "bdev_name": "Malloc0" 00:06:52.259 }, 00:06:52.259 { 00:06:52.259 "nbd_device": "/dev/nbd1", 00:06:52.259 "bdev_name": "Malloc1" 00:06:52.259 } 00:06:52.259 ]' 00:06:52.259 17:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.259 { 00:06:52.259 "nbd_device": "/dev/nbd0", 00:06:52.259 "bdev_name": "Malloc0" 00:06:52.259 }, 00:06:52.259 { 00:06:52.259 "nbd_device": "/dev/nbd1", 00:06:52.259 "bdev_name": "Malloc1" 00:06:52.259 } 00:06:52.259 ]' 00:06:52.259 17:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.530 17:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.531 /dev/nbd1' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.531 /dev/nbd1' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.531 256+0 records in 00:06:52.531 256+0 records out 00:06:52.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719033 s, 146 MB/s 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.531 256+0 records in 00:06:52.531 256+0 records out 00:06:52.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029683 s, 35.3 MB/s 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.531 256+0 records in 00:06:52.531 256+0 records out 00:06:52.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261681 s, 40.1 MB/s 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.531 
17:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.531 17:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.788 17:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.045 17:13:03 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.045 17:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.302 17:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.302 17:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.302 17:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.302 17:13:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.303 17:13:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.560 17:13:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.818 [2024-07-15 17:13:04.518470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.818 [2024-07-15 17:13:04.617614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.818 [2024-07-15 17:13:04.617623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.818 [2024-07-15 17:13:04.673477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.818 [2024-07-15 17:13:04.673588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.160 17:13:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.160 spdk_app_start Round 1 00:06:57.160 17:13:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.160 17:13:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76788 /var/tmp/spdk-nbd.sock 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76788 ']' 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
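Editor's note: the waitfornbd trace repeated above follows one pattern for every attached device: poll /proc/partitions until the nbd name appears, read a single 4 KiB block with O_DIRECT, and treat a non-empty read as success. The stand-alone bash sketch below reconstructs that pattern from the trace; the function name, the 0.1 s retry delay, and the temporary file path are illustrative assumptions, not the exact helper from autotest_common.sh.

    #!/usr/bin/env bash
    # Sketch of the nbd readiness poll seen in the trace (retry delay and paths assumed).
    waitfornbd_sketch() {
        local nbd_name=$1            # e.g. nbd0
        local tmp_file=/tmp/nbdtest  # assumed scratch file
        local i
        for ((i = 1; i <= 20; i++)); do
            # Ready once the device name shows up as a whole word in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                # assumed back-off between retries
        done
        ((i <= 20)) || return 1      # gave up after 20 attempts

        # Read one 4 KiB block with O_DIRECT to confirm the device answers I/O.
        dd if=/dev/"$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [[ $size != 0 ]]             # a non-empty read means the device is usable
    }

    # Example: waitfornbd_sketch nbd0 && echo "nbd0 is ready"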
00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.160 17:13:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:57.160 17:13:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.160 Malloc0 00:06:57.160 17:13:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.418 Malloc1 00:06:57.418 17:13:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.418 17:13:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.419 17:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.677 /dev/nbd0 00:06:57.677 17:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.677 17:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.677 1+0 records in 00:06:57.677 1+0 records out 
00:06:57.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397749 s, 10.3 MB/s 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:57.677 17:13:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:57.677 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.677 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.677 17:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.935 /dev/nbd1 00:06:57.935 17:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.935 17:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.935 17:13:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:57.935 17:13:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.936 1+0 records in 00:06:57.936 1+0 records out 00:06:57.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045107 s, 9.1 MB/s 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:57.936 17:13:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:57.936 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.936 17:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.936 17:13:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.936 17:13:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.936 17:13:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.194 { 00:06:58.194 "nbd_device": "/dev/nbd0", 00:06:58.194 "bdev_name": "Malloc0" 00:06:58.194 }, 00:06:58.194 { 00:06:58.194 "nbd_device": "/dev/nbd1", 00:06:58.194 "bdev_name": "Malloc1" 00:06:58.194 } 
00:06:58.194 ]' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.194 { 00:06:58.194 "nbd_device": "/dev/nbd0", 00:06:58.194 "bdev_name": "Malloc0" 00:06:58.194 }, 00:06:58.194 { 00:06:58.194 "nbd_device": "/dev/nbd1", 00:06:58.194 "bdev_name": "Malloc1" 00:06:58.194 } 00:06:58.194 ]' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.194 /dev/nbd1' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.194 /dev/nbd1' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.194 17:13:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.194 256+0 records in 00:06:58.194 256+0 records out 00:06:58.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00915575 s, 115 MB/s 00:06:58.194 17:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.194 17:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.194 256+0 records in 00:06:58.194 256+0 records out 00:06:58.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243715 s, 43.0 MB/s 00:06:58.194 17:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.194 17:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.453 256+0 records in 00:06:58.453 256+0 records out 00:06:58.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290928 s, 36.0 MB/s 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.453 17:13:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.453 17:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.735 17:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.993 17:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.251 17:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.251 17:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.251 17:13:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.251 17:13:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.251 17:13:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.509 17:13:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.767 [2024-07-15 17:13:10.523881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.026 [2024-07-15 17:13:10.623710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.026 [2024-07-15 17:13:10.623717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.026 [2024-07-15 17:13:10.679448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.026 [2024-07-15 17:13:10.679537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.575 17:13:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.575 spdk_app_start Round 2 00:07:02.575 17:13:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:02.575 17:13:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 76788 /var/tmp/spdk-nbd.sock 00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76788 ']' 00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
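Editor's note: each app_repeat round performs the same write-then-verify pass over both nbd devices, visible above and in the rounds that follow: 1 MiB of random data is staged in a temporary file, copied to every device with O_DIRECT, then compared back byte-for-byte with cmp. The condensed sketch below uses the same dd/cmp invocations as the trace; the function name and the scratch-file location are placeholders, and error handling is reduced to set -e.

    #!/usr/bin/env bash
    set -euo pipefail
    # Sketch of the write/verify pass traced above (placeholder names and paths).
    nbd_data_verify_sketch() {
        local nbd_list=("$@")               # e.g. /dev/nbd0 /dev/nbd1
        local tmp_file=/tmp/nbdrandtest     # assumed scratch file

        # Stage 1 MiB (256 x 4 KiB) of random data.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

        # Write the same payload to every attached nbd device with O_DIRECT.
        local dev
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done

        # Read it back and compare the first 1 MiB byte-for-byte.
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done

        rm "$tmp_file"
    }

    # Example: nbd_data_verify_sketch /dev/nbd0 /dev/nbd1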
00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.575 17:13:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.833 17:13:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.833 17:13:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:02.833 17:13:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.091 Malloc0 00:07:03.363 17:13:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.643 Malloc1 00:07:03.643 17:13:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.643 17:13:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.902 /dev/nbd0 00:07:03.902 17:13:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.902 17:13:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.902 1+0 records in 00:07:03.902 1+0 records out 
00:07:03.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292704 s, 14.0 MB/s 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:03.902 17:13:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:03.902 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.902 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.902 17:13:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.159 /dev/nbd1 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.159 1+0 records in 00:07:04.159 1+0 records out 00:07:04.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561078 s, 7.3 MB/s 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.159 17:13:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.159 17:13:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.417 17:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.417 { 00:07:04.417 "nbd_device": "/dev/nbd0", 00:07:04.417 "bdev_name": "Malloc0" 00:07:04.417 }, 00:07:04.417 { 00:07:04.417 "nbd_device": "/dev/nbd1", 00:07:04.417 "bdev_name": "Malloc1" 00:07:04.417 } 
00:07:04.417 ]' 00:07:04.417 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.417 { 00:07:04.417 "nbd_device": "/dev/nbd0", 00:07:04.417 "bdev_name": "Malloc0" 00:07:04.418 }, 00:07:04.418 { 00:07:04.418 "nbd_device": "/dev/nbd1", 00:07:04.418 "bdev_name": "Malloc1" 00:07:04.418 } 00:07:04.418 ]' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.418 /dev/nbd1' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.418 /dev/nbd1' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.418 256+0 records in 00:07:04.418 256+0 records out 00:07:04.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678502 s, 155 MB/s 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.418 256+0 records in 00:07:04.418 256+0 records out 00:07:04.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247265 s, 42.4 MB/s 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.418 256+0 records in 00:07:04.418 256+0 records out 00:07:04.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298763 s, 35.1 MB/s 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.418 17:13:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.418 17:13:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.676 17:13:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.934 17:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.192 17:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.192 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.192 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.450 17:13:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.450 17:13:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.708 17:13:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.967 [2024-07-15 17:13:16.649273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.967 [2024-07-15 17:13:16.747820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.967 [2024-07-15 17:13:16.747837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.967 [2024-07-15 17:13:16.805500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.967 [2024-07-15 17:13:16.805594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.251 17:13:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 76788 /var/tmp/spdk-nbd.sock 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 76788 ']' 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
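Editor's note: stripped of the polling helpers, each round is a short RPC conversation with the target over /var/tmp/spdk-nbd.sock. Every rpc.py call in the sketch below appears verbatim in the trace; only their collection into one script and the SPDK_DIR variable are added here for illustration.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk              # repo path taken from the trace
    rpc=("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock)

    # Create two malloc bdevs (size and block-size arguments as invoked in the trace).
    "${rpc[@]}" bdev_malloc_create 64 4096             # -> Malloc0
    "${rpc[@]}" bdev_malloc_create 64 4096             # -> Malloc1

    # Expose each bdev as a kernel nbd device.
    "${rpc[@]}" nbd_start_disk Malloc0 /dev/nbd0
    "${rpc[@]}" nbd_start_disk Malloc1 /dev/nbd1

    # List attached devices; the JSON pairs nbd_device with bdev_name.
    "${rpc[@]}" nbd_get_disks | jq -r '.[] | .nbd_device'

    # Detach both devices, then ask the app to shut itself down.
    "${rpc[@]}" nbd_stop_disk /dev/nbd0
    "${rpc[@]}" nbd_stop_disk /dev/nbd1
    "${rpc[@]}" spdk_kill_instance SIGTERM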
00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:09.251 17:13:19 event.app_repeat -- event/event.sh@39 -- # killprocess 76788 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 76788 ']' 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 76788 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76788 00:07:09.251 killing process with pid 76788 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76788' 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@967 -- # kill 76788 00:07:09.251 17:13:19 event.app_repeat -- common/autotest_common.sh@972 -- # wait 76788 00:07:09.510 spdk_app_start is called in Round 0. 00:07:09.510 Shutdown signal received, stop current app iteration 00:07:09.510 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:07:09.510 spdk_app_start is called in Round 1. 00:07:09.510 Shutdown signal received, stop current app iteration 00:07:09.510 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:07:09.510 spdk_app_start is called in Round 2. 00:07:09.510 Shutdown signal received, stop current app iteration 00:07:09.510 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:07:09.510 spdk_app_start is called in Round 3. 
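Editor's note: the teardown traced just above goes through the killprocess helper: confirm the pid is still alive, check the command name on Linux (the sudo branch is never taken in this run), then kill and wait. The reconstruction below mirrors the traced checks; its exact structure is an assumption, and it simply bails out on the sudo case rather than guessing what the real helper does there.

    #!/usr/bin/env bash
    # Reconstruction of the process-teardown pattern traced above (structure assumed).
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1            # the pid must still refer to a live process

        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # The sudo branch is not exercised in the trace; this sketch just refuses.
            [[ $process_name == sudo ]] && return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it; works when the target is our child
    }

    # Example: killprocess_sketch 76788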
00:07:09.510 Shutdown signal received, stop current app iteration 00:07:09.510 17:13:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:09.510 17:13:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:09.510 00:07:09.510 real 0m19.468s 00:07:09.510 user 0m43.436s 00:07:09.510 sys 0m3.037s 00:07:09.510 17:13:20 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.510 ************************************ 00:07:09.510 END TEST app_repeat 00:07:09.510 ************************************ 00:07:09.510 17:13:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.510 17:13:20 event -- common/autotest_common.sh@1142 -- # return 0 00:07:09.510 17:13:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:09.510 17:13:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:09.510 17:13:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.510 17:13:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.510 17:13:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.510 ************************************ 00:07:09.510 START TEST cpu_locks 00:07:09.510 ************************************ 00:07:09.510 17:13:20 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:09.510 * Looking for test storage... 00:07:09.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:09.510 17:13:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:09.510 17:13:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:09.510 17:13:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:09.510 17:13:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:09.510 17:13:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.510 17:13:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.510 17:13:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.510 ************************************ 00:07:09.510 START TEST default_locks 00:07:09.510 ************************************ 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=77222 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 77222 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 77222 ']' 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
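Editor's note: throughout the cpu_locks tests that start here, the assertion primitive is the lslocks/grep pair exercised a few records below: list the file locks held by the target pid and look for the spdk_cpu_lock entries taken per claimed core. A hedged stand-alone version of that check follows; the function name and messages are additions, the lslocks/grep invocation is as traced.

    #!/usr/bin/env bash
    # Check that a running SPDK target holds its per-core CPU lock files,
    # mirroring the lslocks | grep spdk_cpu_lock pair that recurs in the trace.
    locks_exist_sketch() {
        local pid=$1
        if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
            echo "pid $pid holds spdk_cpu_lock file locks"
        else
            echo "pid $pid holds no spdk_cpu_lock file locks" >&2
            return 1
        fi
    }

    # Example: locks_exist_sketch 77222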
00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.510 17:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.769 [2024-07-15 17:13:20.443820] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:09.769 [2024-07-15 17:13:20.444030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77222 ] 00:07:09.769 [2024-07-15 17:13:20.602567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:09.769 [2024-07-15 17:13:20.622912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.028 [2024-07-15 17:13:20.759872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.595 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.595 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:10.595 17:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 77222 00:07:10.595 17:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 77222 00:07:10.595 17:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 77222 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 77222 ']' 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 77222 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77222 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.852 killing process with pid 77222 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.852 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77222' 00:07:10.853 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 77222 00:07:10.853 17:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 77222 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 77222 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77222 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 77222 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 77222 ']' 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.418 ERROR: process (pid: 77222) is no longer running 00:07:11.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77222) - No such process 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.418 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.419 00:07:11.419 real 0m1.806s 00:07:11.419 user 0m1.685s 00:07:11.419 sys 0m0.744s 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.419 17:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.419 ************************************ 00:07:11.419 END TEST default_locks 00:07:11.419 ************************************ 00:07:11.419 17:13:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.419 17:13:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.419 17:13:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.419 17:13:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.419 17:13:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.419 ************************************ 00:07:11.419 START TEST default_locks_via_rpc 00:07:11.419 ************************************ 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=77275 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 77275 00:07:11.419 17:13:22 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77275 ']' 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.419 17:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.419 [2024-07-15 17:13:22.261613] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:11.419 [2024-07-15 17:13:22.261791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77275 ] 00:07:11.676 [2024-07-15 17:13:22.408277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:11.676 [2024-07-15 17:13:22.430604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.676 [2024-07-15 17:13:22.529619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.635 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.635 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.635 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 77275 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 77275 00:07:12.636 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 77275 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 77275 ']' 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 77275 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77275 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.893 killing process with pid 77275 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77275' 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 77275 00:07:12.893 17:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 77275 00:07:13.460 00:07:13.460 real 0m1.969s 00:07:13.460 user 0m2.084s 00:07:13.460 sys 0m0.617s 00:07:13.460 17:13:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.460 ************************************ 00:07:13.460 END TEST default_locks_via_rpc 00:07:13.460 ************************************ 00:07:13.460 17:13:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.460 17:13:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.460 17:13:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.460 17:13:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.460 17:13:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.460 17:13:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.460 ************************************ 00:07:13.460 START TEST non_locking_app_on_locked_coremask 00:07:13.460 ************************************ 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=77327 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 77327 /var/tmp/spdk.sock 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77327 ']' 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:07:13.460 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.461 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.461 17:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.461 [2024-07-15 17:13:24.295957] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:13.461 [2024-07-15 17:13:24.296167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77327 ] 00:07:13.719 [2024-07-15 17:13:24.448783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.719 [2024-07-15 17:13:24.470283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.719 [2024-07-15 17:13:24.573392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=77343 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 77343 /var/tmp/spdk2.sock 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77343 ']' 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.655 17:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.655 [2024-07-15 17:13:25.313660] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:07:14.655 [2024-07-15 17:13:25.313834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77343 ] 00:07:14.655 [2024-07-15 17:13:25.465419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.655 [2024-07-15 17:13:25.495378] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.655 [2024-07-15 17:13:25.495474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.914 [2024-07-15 17:13:25.693479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.482 17:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.482 17:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.482 17:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 77327 00:07:15.482 17:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77327 00:07:15.482 17:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.416 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 77327 00:07:16.416 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77327 ']' 00:07:16.416 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77327 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77327 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.417 killing process with pid 77327 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77327' 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77327 00:07:16.417 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77327 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 77343 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77343 ']' 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77343 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.351 17:13:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77343 00:07:17.351 killing process with pid 77343 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77343' 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77343 00:07:17.351 17:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77343 00:07:17.609 ************************************ 00:07:17.609 END TEST non_locking_app_on_locked_coremask 00:07:17.609 ************************************ 00:07:17.609 00:07:17.609 real 0m4.240s 00:07:17.609 user 0m4.644s 00:07:17.609 sys 0m1.227s 00:07:17.609 17:13:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.610 17:13:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.610 17:13:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.610 17:13:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.610 17:13:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.610 17:13:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.610 17:13:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.868 ************************************ 00:07:17.868 START TEST locking_app_on_unlocked_coremask 00:07:17.868 ************************************ 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=77412 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 77412 /var/tmp/spdk.sock 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77412 ']' 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.868 17:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.868 [2024-07-15 17:13:28.589241] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:17.868 [2024-07-15 17:13:28.589445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77412 ] 00:07:18.127 [2024-07-15 17:13:28.733404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.127 [2024-07-15 17:13:28.753137] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:18.127 [2024-07-15 17:13:28.753264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.127 [2024-07-15 17:13:28.855804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=77428 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 77428 /var/tmp/spdk2.sock 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77428 ']' 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.693 17:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.951 [2024-07-15 17:13:29.652657] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:18.951 [2024-07-15 17:13:29.653105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77428 ] 00:07:19.209 [2024-07-15 17:13:29.832281] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:19.209 [2024-07-15 17:13:29.858919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.209 [2024-07-15 17:13:30.063516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.777 17:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.777 17:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:19.777 17:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 77428 00:07:19.777 17:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77428 00:07:19.777 17:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 77412 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77412 ']' 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 77412 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77412 00:07:20.736 killing process with pid 77412 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77412' 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 77412 00:07:20.736 17:13:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 77412 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 77428 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77428 ']' 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 77428 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77428 00:07:21.670 killing process with pid 77428 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77428' 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 77428 00:07:21.670 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 77428 00:07:21.944 ************************************ 00:07:21.944 END TEST locking_app_on_unlocked_coremask 00:07:21.944 ************************************ 00:07:21.944 00:07:21.944 real 0m4.280s 00:07:21.944 user 0m4.678s 00:07:21.944 sys 0m1.319s 00:07:21.944 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.944 17:13:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.944 17:13:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.944 17:13:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:21.944 17:13:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.944 17:13:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.944 17:13:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 ************************************ 00:07:22.203 START TEST locking_app_on_locked_coremask 00:07:22.203 ************************************ 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=77497 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 77497 /var/tmp/spdk.sock 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77497 ']' 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.203 17:13:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 [2024-07-15 17:13:32.905555] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:22.203 [2024-07-15 17:13:32.905727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77497 ] 00:07:22.203 [2024-07-15 17:13:33.050604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:22.461 [2024-07-15 17:13:33.072539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.461 [2024-07-15 17:13:33.176263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=77513 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 77513 /var/tmp/spdk2.sock 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77513 /var/tmp/spdk2.sock 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:23.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 77513 /var/tmp/spdk2.sock 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 77513 ']' 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.026 17:13:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.026 [2024-07-15 17:13:33.873450] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:23.026 [2024-07-15 17:13:33.873603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77513 ] 00:07:23.283 [2024-07-15 17:13:34.018306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.283 [2024-07-15 17:13:34.045341] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 77497 has claimed it. 
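The claim failure logged just above (and the abort that follows) is the expected outcome of this test step: the first target, pid 77497, was started on the same core mask and still holds the per-core lock, so the second target (pid 77513) cannot start. A minimal, hand-run sketch of inspecting those locks, assuming the /var/tmp/spdk_cpu_lock_NNN file naming that check_remaining_locks uses later in this log:

for f in /var/tmp/spdk_cpu_lock_*; do
    printf '%s: ' "$f"
    lslocks | grep -- "$f" || echo 'not locked'   # lslocks lists COMMAND, PID and PATH; the holder here should be pid 77497
done
# The test scripts themselves use the shorter check traced elsewhere in this log:
#   lslocks -p "$pid" | grep -q spdk_cpu_lock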
00:07:23.283 [2024-07-15 17:13:34.049509] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:23.850 ERROR: process (pid: 77513) is no longer running 00:07:23.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77513) - No such process 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 77497 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 77497 00:07:23.850 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 77497 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 77497 ']' 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 77497 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.436 17:13:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77497 00:07:24.436 killing process with pid 77497 00:07:24.436 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.436 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.436 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77497' 00:07:24.436 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 77497 00:07:24.436 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 77497 00:07:24.694 00:07:24.694 real 0m2.660s 00:07:24.694 user 0m2.904s 00:07:24.694 sys 0m0.804s 00:07:24.694 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.694 17:13:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.694 ************************************ 00:07:24.694 END TEST locking_app_on_locked_coremask 00:07:24.694 ************************************ 00:07:24.694 17:13:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:24.694 17:13:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:24.694 17:13:35 event.cpu_locks -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.694 17:13:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.694 17:13:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.694 ************************************ 00:07:24.694 START TEST locking_overlapped_coremask 00:07:24.694 ************************************ 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=77566 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 77566 /var/tmp/spdk.sock 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 77566 ']' 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.694 17:13:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 [2024-07-15 17:13:35.617402] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:24.954 [2024-07-15 17:13:35.617856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77566 ] 00:07:24.954 [2024-07-15 17:13:35.764238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
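For readers following the -m arguments in these traces: they are hexadecimal CPU core masks, so -m 0x1 selects core 0 only, -m 0x7 (used for this test target) selects cores 0 through 2, matching the three reactors reported just below, and -m 0x1c, used for the second target later in this test, selects cores 2 through 4. A purely illustrative sketch of the mapping:

mask_to_cores() {
    local mask=$(( $1 ))   # accepts hex input such as 0x7
    local core=0 cores=""
    while (( mask )); do
        (( mask & 1 )) && cores+=" $core"
        (( mask >>= 1, core += 1 ))
    done
    echo "${cores# }"
}
mask_to_cores 0x7    # -> 0 1 2
mask_to_cores 0x1c   # -> 2 3 4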
00:07:24.954 [2024-07-15 17:13:35.783739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.213 [2024-07-15 17:13:35.887956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.213 [2024-07-15 17:13:35.888014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.213 [2024-07-15 17:13:35.888079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=77584 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 77584 /var/tmp/spdk2.sock 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 77584 /var/tmp/spdk2.sock 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 77584 /var/tmp/spdk2.sock 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 77584 ']' 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.779 17:13:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.037 [2024-07-15 17:13:36.714421] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:26.037 [2024-07-15 17:13:36.714628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77584 ] 00:07:26.037 [2024-07-15 17:13:36.870010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:26.295 [2024-07-15 17:13:36.900502] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 77566 has claimed it. 00:07:26.295 [2024-07-15 17:13:36.900649] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.552 ERROR: process (pid: 77584) is no longer running 00:07:26.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (77584) - No such process 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 77566 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 77566 ']' 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 77566 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.552 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77566 00:07:26.808 killing process with pid 77566 00:07:26.808 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.808 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.808 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77566' 00:07:26.808 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 77566 00:07:26.808 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 77566 00:07:27.065 ************************************ 00:07:27.065 END TEST locking_overlapped_coremask 00:07:27.065 ************************************ 00:07:27.065 00:07:27.065 real 0m2.357s 00:07:27.065 user 0m6.343s 00:07:27.065 sys 
0m0.600s 00:07:27.065 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.065 17:13:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.065 17:13:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:27.065 17:13:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.065 17:13:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.065 17:13:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.065 17:13:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.323 ************************************ 00:07:27.323 START TEST locking_overlapped_coremask_via_rpc 00:07:27.323 ************************************ 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:27.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=77626 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 77626 /var/tmp/spdk.sock 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77626 ']' 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.323 17:13:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.323 [2024-07-15 17:13:38.030049] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:27.323 [2024-07-15 17:13:38.030231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77626 ] 00:07:27.323 [2024-07-15 17:13:38.174718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.580 [2024-07-15 17:13:38.192583] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:27.580 [2024-07-15 17:13:38.192684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.580 [2024-07-15 17:13:38.295047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.580 [2024-07-15 17:13:38.295122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.580 [2024-07-15 17:13:38.295070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=77644 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 77644 /var/tmp/spdk2.sock 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77644 ']' 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.203 17:13:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.472 [2024-07-15 17:13:39.054611] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:28.472 [2024-07-15 17:13:39.055291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77644 ] 00:07:28.472 [2024-07-15 17:13:39.213086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.472 [2024-07-15 17:13:39.242520] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.472 [2024-07-15 17:13:39.242592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.730 [2024-07-15 17:13:39.452408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.730 [2024-07-15 17:13:39.455515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.730 [2024-07-15 17:13:39.455585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.296 [2024-07-15 17:13:40.049598] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 77626 has claimed it. 00:07:29.296 request: 00:07:29.296 { 00:07:29.296 "method": "framework_enable_cpumask_locks", 00:07:29.296 "req_id": 1 00:07:29.296 } 00:07:29.296 Got JSON-RPC error response 00:07:29.296 response: 00:07:29.296 { 00:07:29.296 "code": -32603, 00:07:29.296 "message": "Failed to claim CPU core: 2" 00:07:29.296 } 00:07:29.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 77626 /var/tmp/spdk.sock 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77626 ']' 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.296 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 77644 /var/tmp/spdk2.sock 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 77644 ']' 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.572 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.830 00:07:29.830 real 0m2.692s 00:07:29.830 user 0m1.413s 00:07:29.830 sys 0m0.205s 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.830 17:13:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.830 ************************************ 00:07:29.830 END TEST locking_overlapped_coremask_via_rpc 00:07:29.830 ************************************ 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:29.830 17:13:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:29.830 17:13:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77626 ]] 00:07:29.830 17:13:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77626 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77626 ']' 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77626 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77626 00:07:29.830 killing process with pid 77626 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77626' 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 77626 00:07:29.830 17:13:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 77626 00:07:30.395 17:13:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77644 ]] 00:07:30.395 17:13:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77644 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77644 ']' 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77644 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:30.395 17:13:41 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77644 00:07:30.395 killing process with pid 77644 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77644' 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 77644 00:07:30.395 17:13:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 77644 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 77626 ]] 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 77626 00:07:30.960 Process with pid 77626 is not found 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77626 ']' 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77626 00:07:30.960 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (77626) - No such process 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 77626 is not found' 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 77644 ]] 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 77644 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 77644 ']' 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 77644 00:07:30.960 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (77644) - No such process 00:07:30.960 Process with pid 77644 is not found 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 77644 is not found' 00:07:30.960 17:13:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.960 ************************************ 00:07:30.960 END TEST cpu_locks 00:07:30.960 ************************************ 00:07:30.960 00:07:30.960 real 0m21.478s 00:07:30.960 user 0m36.644s 00:07:30.960 sys 0m6.576s 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.960 17:13:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 17:13:41 event -- common/autotest_common.sh@1142 -- # return 0 00:07:30.960 ************************************ 00:07:30.960 END TEST event 00:07:30.960 ************************************ 00:07:30.960 00:07:30.960 real 0m50.212s 00:07:30.960 user 1m35.221s 00:07:30.960 sys 0m10.656s 00:07:30.960 17:13:41 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.960 17:13:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 17:13:41 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.960 17:13:41 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.960 17:13:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.960 17:13:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.960 17:13:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.960 ************************************ 00:07:30.960 START TEST thread 
00:07:30.960 ************************************ 00:07:30.960 17:13:41 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:31.230 * Looking for test storage... 00:07:31.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:31.230 17:13:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.230 17:13:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:31.230 17:13:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.230 17:13:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.230 ************************************ 00:07:31.230 START TEST thread_poller_perf 00:07:31.230 ************************************ 00:07:31.230 17:13:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.230 [2024-07-15 17:13:41.911437] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:31.230 [2024-07-15 17:13:41.911631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77774 ] 00:07:31.230 [2024-07-15 17:13:42.064988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:31.519 [2024-07-15 17:13:42.083440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.519 [2024-07-15 17:13:42.185604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.519 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:32.455 ====================================== 00:07:32.455 busy:2211675994 (cyc) 00:07:32.455 total_run_count: 297000 00:07:32.456 tsc_hz: 2200000000 (cyc) 00:07:32.456 ====================================== 00:07:32.456 poller_cost: 7446 (cyc), 3384 (nsec) 00:07:32.456 00:07:32.456 real 0m1.426s 00:07:32.456 user 0m1.199s 00:07:32.456 sys 0m0.117s 00:07:32.456 17:13:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.456 17:13:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.456 ************************************ 00:07:32.456 END TEST thread_poller_perf 00:07:32.456 ************************************ 00:07:32.712 17:13:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:32.712 17:13:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.712 17:13:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:32.712 17:13:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.712 17:13:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.712 ************************************ 00:07:32.712 START TEST thread_poller_perf 00:07:32.712 ************************************ 00:07:32.712 17:13:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.712 [2024-07-15 17:13:43.385658] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
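As a sanity check on the numbers above, the first poller_perf summary is internally consistent: poller_cost is the busy cycle count divided by the number of poller invocations, converted to nanoseconds at the reported tsc_hz (integer arithmetic, so it agrees up to rounding). A quick recomputation:

busy=2211675994 runs=297000 tsc_hz=2200000000
echo "$(( busy / runs )) cyc per poll"                         # 7446, as reported
echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec per poll"  # 3384, as reported

The second poller_perf run (with -l 0, i.e. no poller period) reports the same relation with a much cheaper poll: 2204117592 cycles over 3784000 runs gives 582 cyc, about 264 nsec.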
00:07:32.712 [2024-07-15 17:13:43.386165] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77805 ] 00:07:32.712 [2024-07-15 17:13:43.537796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.712 [2024-07-15 17:13:43.558775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.969 [2024-07-15 17:13:43.660008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.969 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:33.902 ====================================== 00:07:33.902 busy:2204117592 (cyc) 00:07:33.902 total_run_count: 3784000 00:07:33.902 tsc_hz: 2200000000 (cyc) 00:07:33.902 ====================================== 00:07:33.902 poller_cost: 582 (cyc), 264 (nsec) 00:07:34.160 00:07:34.160 real 0m1.420s 00:07:34.160 user 0m1.203s 00:07:34.160 sys 0m0.106s 00:07:34.160 17:13:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.160 17:13:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.160 ************************************ 00:07:34.160 END TEST thread_poller_perf 00:07:34.160 ************************************ 00:07:34.160 17:13:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:34.160 17:13:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:34.160 00:07:34.160 real 0m3.027s 00:07:34.160 user 0m2.466s 00:07:34.160 sys 0m0.336s 00:07:34.160 17:13:44 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.160 17:13:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.160 ************************************ 00:07:34.160 END TEST thread 00:07:34.160 ************************************ 00:07:34.160 17:13:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.160 17:13:44 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:34.160 17:13:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.160 17:13:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.160 17:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:34.160 ************************************ 00:07:34.160 START TEST accel 00:07:34.160 ************************************ 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:34.160 * Looking for test storage... 00:07:34.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:34.160 17:13:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:34.160 17:13:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:34.160 17:13:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:34.160 17:13:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=77886 00:07:34.160 17:13:44 accel -- accel/accel.sh@63 -- # waitforlisten 77886 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@829 -- # '[' -z 77886 ']' 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
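For reference, the two poller_perf summaries above are internally consistent: poller_cost is the measured busy cycle count divided by total_run_count, and the nanosecond figure is that per-call cycle cost scaled by the reported tsc_hz. Below is a minimal bash sketch of the same arithmetic using the numbers from the first run; the variable names are just labels for the summary fields, not part of the test harness.

# Illustrative only -- reproduces the derivation of the printed poller_cost.
busy=2211675994; total_run_count=297000; tsc_hz=2200000000
poller_cost_cyc=$(( busy / total_run_count ))                          # 7446 cyc
poller_cost_nsec=$(( busy * 1000000000 / tsc_hz / total_run_count ))   # 3384 nsec
echo "poller_cost: ${poller_cost_cyc} (cyc), ${poller_cost_nsec} (nsec)"

Plugging in the second run (busy 2204117592 cycles over 3784000 runs) gives 582 cyc and 264 nsec, matching the summary for the zero-period pollers.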
00:07:34.160 17:13:44 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.160 17:13:44 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:34.160 17:13:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:34.160 17:13:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.160 17:13:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.160 17:13:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.160 17:13:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.160 17:13:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.160 17:13:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.160 17:13:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:34.160 17:13:44 accel -- accel/accel.sh@41 -- # jq -r . 00:07:34.418 [2024-07-15 17:13:45.052047] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:34.419 [2024-07-15 17:13:45.052228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77886 ] 00:07:34.419 [2024-07-15 17:13:45.199254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.419 [2024-07-15 17:13:45.220669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.677 [2024-07-15 17:13:45.322173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.244 17:13:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.244 17:13:45 accel -- common/autotest_common.sh@862 -- # return 0 00:07:35.244 17:13:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:35.244 17:13:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:35.244 17:13:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:35.244 17:13:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:35.244 17:13:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:35.244 17:13:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:35.244 17:13:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.244 17:13:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.244 17:13:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:35.244 17:13:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:35.244 17:13:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:35.244 17:13:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:35.244 17:13:46 accel -- accel/accel.sh@75 -- # killprocess 77886 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@948 -- # '[' -z 77886 ']' 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@952 -- # kill -0 77886 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@953 -- # uname 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77886 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.244 killing process with pid 77886 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77886' 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@967 -- # kill 77886 00:07:35.244 17:13:46 accel -- common/autotest_common.sh@972 -- # wait 77886 00:07:35.858 17:13:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:35.858 17:13:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.858 17:13:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:35.858 17:13:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:35.858 17:13:46 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.858 17:13:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.858 17:13:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.858 17:13:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.858 ************************************ 00:07:35.858 START TEST accel_missing_filename 00:07:35.858 ************************************ 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.858 17:13:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:35.858 17:13:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:36.118 [2024-07-15 17:13:46.709733] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:36.118 [2024-07-15 17:13:46.710021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77940 ] 00:07:36.118 [2024-07-15 17:13:46.855760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
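accel_missing_filename is a negative test: run_test wraps accel_perf in the harness's NOT helper, so the case passes precisely because the compress run below fails ('A filename is required.'), after which the exit status is normalized (es=234 -> 106 -> 1) before the final (( !es == 0 )) check. The real helper lives in test/common/autotest_common.sh; the snippet below is only a hedged, minimal sketch of that invert-the-exit-status idea, not the actual implementation.

# Hypothetical NOT-style wrapper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    else
        return 0   # wrapped command failed, which is the expected outcome
    fi
}
# Mirrors the traced case: compress with no input file must be rejected.
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress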
00:07:36.118 [2024-07-15 17:13:46.877513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.376 [2024-07-15 17:13:46.990385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.376 [2024-07-15 17:13:47.060080] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.376 [2024-07-15 17:13:47.144998] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:36.635 A filename is required. 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.635 00:07:36.635 real 0m0.604s 00:07:36.635 user 0m0.374s 00:07:36.635 sys 0m0.187s 00:07:36.635 ************************************ 00:07:36.635 END TEST accel_missing_filename 00:07:36.635 ************************************ 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.635 17:13:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:36.635 17:13:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.635 17:13:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.635 17:13:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:36.635 17:13:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.635 17:13:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.635 ************************************ 00:07:36.635 START TEST accel_compress_verify 00:07:36.635 ************************************ 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.635 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:36.635 17:13:47 accel.accel_compress_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:36.635 17:13:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:36.635 [2024-07-15 17:13:47.348428] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:36.635 [2024-07-15 17:13:47.348631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77965 ] 00:07:36.894 [2024-07-15 17:13:47.503312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.894 [2024-07-15 17:13:47.524326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.894 [2024-07-15 17:13:47.628113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.894 [2024-07-15 17:13:47.691632] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.152 [2024-07-15 17:13:47.787707] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:37.152 00:07:37.152 Compression does not support the verify option, aborting. 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:37.152 ************************************ 00:07:37.152 END TEST accel_compress_verify 00:07:37.152 ************************************ 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.152 00:07:37.152 real 0m0.599s 00:07:37.152 user 0m0.357s 00:07:37.152 sys 0m0.176s 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.152 17:13:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:37.152 17:13:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.152 17:13:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:37.152 17:13:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.152 17:13:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.152 17:13:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.152 ************************************ 00:07:37.152 START TEST accel_wrong_workload 00:07:37.152 ************************************ 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:37.152 17:13:47 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.152 17:13:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:37.152 17:13:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:37.152 Unsupported workload type: foobar 00:07:37.152 [2024-07-15 17:13:47.992108] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:37.411 accel_perf options: 00:07:37.411 [-h help message] 00:07:37.411 [-q queue depth per core] 00:07:37.411 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:37.411 [-T number of threads per core 00:07:37.411 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:37.411 [-t time in seconds] 00:07:37.411 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:37.411 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:37.411 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:37.411 [-l for compress/decompress workloads, name of uncompressed input file 00:07:37.411 [-S for crc32c workload, use this seed value (default 0) 00:07:37.411 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:37.411 [-f for fill workload, use this BYTE value (default 255) 00:07:37.411 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:37.411 [-y verify result if this switch is on] 00:07:37.411 [-a tasks to allocate per core (default: same value as -q)] 00:07:37.411 Can be used to spread operations across a wider range of memory. 
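The usage text above is the full accel_perf option list; this case fails only because 'foobar' is not one of the listed workload types. For comparison, a hedged example of a well-formed invocation, built solely from options shown in that list and the binary path used throughout this log (illustrative only, not executed by this run):

# crc32c for 1 second, seed 32, verify results, queue depth 64, 4 KiB transfers:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64 -o 4096

The accel_negative_buffers case that follows trips the same usage text for a different reason: -x -1 violates the documented minimum of 2 source buffers for the xor workload.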
00:07:37.411 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:37.411 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.411 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.412 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.412 ************************************ 00:07:37.412 END TEST accel_wrong_workload 00:07:37.412 ************************************ 00:07:37.412 00:07:37.412 real 0m0.068s 00:07:37.412 user 0m0.041s 00:07:37.412 sys 0m0.025s 00:07:37.412 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.412 17:13:48 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.412 17:13:48 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 ************************************ 00:07:37.412 START TEST accel_negative_buffers 00:07:37.412 ************************************ 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:37.412 17:13:48 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:37.412 -x option must be non-negative. 
00:07:37.412 [2024-07-15 17:13:48.113290] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:37.412 accel_perf options: 00:07:37.412 [-h help message] 00:07:37.412 [-q queue depth per core] 00:07:37.412 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:37.412 [-T number of threads per core 00:07:37.412 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:37.412 [-t time in seconds] 00:07:37.412 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:37.412 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:37.412 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:37.412 [-l for compress/decompress workloads, name of uncompressed input file 00:07:37.412 [-S for crc32c workload, use this seed value (default 0) 00:07:37.412 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:37.412 [-f for fill workload, use this BYTE value (default 255) 00:07:37.412 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:37.412 [-y verify result if this switch is on] 00:07:37.412 [-a tasks to allocate per core (default: same value as -q)] 00:07:37.412 Can be used to spread operations across a wider range of memory. 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.412 00:07:37.412 real 0m0.069s 00:07:37.412 user 0m0.082s 00:07:37.412 sys 0m0.033s 00:07:37.412 ************************************ 00:07:37.412 END TEST accel_negative_buffers 00:07:37.412 ************************************ 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.412 17:13:48 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.412 17:13:48 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.412 17:13:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.412 ************************************ 00:07:37.412 START TEST accel_crc32c 00:07:37.412 ************************************ 00:07:37.412 17:13:48 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:37.412 17:13:48 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:37.412 [2024-07-15 17:13:48.239340] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:37.412 [2024-07-15 17:13:48.239587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78032 ] 00:07:37.671 [2024-07-15 17:13:48.394275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.671 [2024-07-15 17:13:48.417553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.671 [2024-07-15 17:13:48.523619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.929 17:13:48 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.929 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.930 17:13:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.306 17:13:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.306 00:07:39.306 real 0m1.593s 00:07:39.306 user 0m1.320s 00:07:39.306 sys 0m0.180s 00:07:39.306 17:13:49 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.306 17:13:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:39.306 ************************************ 00:07:39.306 END TEST accel_crc32c 00:07:39.306 ************************************ 00:07:39.306 17:13:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.306 17:13:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:39.306 17:13:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.306 17:13:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.306 17:13:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.306 ************************************ 00:07:39.306 START TEST accel_crc32c_C2 00:07:39.306 ************************************ 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
crc32c -y -C 2 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.306 17:13:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:39.306 [2024-07-15 17:13:49.877305] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:39.306 [2024-07-15 17:13:49.877542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78068 ] 00:07:39.306 [2024-07-15 17:13:50.022956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.306 [2024-07-15 17:13:50.043969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.306 [2024-07-15 17:13:50.154397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 
00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.566 17:13:50 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.566 17:13:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.941 00:07:40.941 real 0m1.579s 00:07:40.941 user 0m1.320s 00:07:40.941 sys 0m0.168s 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.941 17:13:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:40.941 ************************************ 00:07:40.941 END TEST accel_crc32c_C2 00:07:40.941 ************************************ 00:07:40.941 17:13:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.941 17:13:51 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:40.941 17:13:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:40.941 17:13:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.941 17:13:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.941 ************************************ 00:07:40.941 START TEST accel_copy 00:07:40.941 ************************************ 00:07:40.941 17:13:51 accel.accel_copy -- common/autotest_common.sh@1123 -- 
# accel_test -t 1 -w copy -y 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:40.941 17:13:51 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:40.941 [2024-07-15 17:13:51.511835] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:40.942 [2024-07-15 17:13:51.512018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78103 ] 00:07:40.942 [2024-07-15 17:13:51.663604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.942 [2024-07-15 17:13:51.682873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.942 [2024-07-15 17:13:51.783397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.200 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.201 17:13:51 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.201 17:13:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.572 ************************************ 00:07:42.572 END TEST accel_copy 00:07:42.572 ************************************ 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:42.572 17:13:53 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.572 00:07:42.572 real 0m1.574s 00:07:42.572 user 0m1.302s 00:07:42.572 sys 0m0.180s 00:07:42.572 17:13:53 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.572 17:13:53 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 17:13:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.572 17:13:53 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.572 17:13:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:42.572 17:13:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.572 17:13:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 ************************************ 00:07:42.572 START TEST accel_fill 00:07:42.572 ************************************ 00:07:42.572 17:13:53 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:42.572 17:13:53 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:42.572 [2024-07-15 17:13:53.141871] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:42.572 [2024-07-15 17:13:53.142077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78139 ] 00:07:42.572 [2024-07-15 17:13:53.294186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:42.572 [2024-07-15 17:13:53.316651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.572 [2024-07-15 17:13:53.417950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.829 17:13:53 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 17:13:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:44.210 17:13:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.210 00:07:44.210 real 0m1.579s 00:07:44.210 user 0m1.317s 00:07:44.210 sys 0m0.168s 00:07:44.210 17:13:54 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.210 17:13:54 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:44.210 ************************************ 00:07:44.210 END TEST accel_fill 00:07:44.210 ************************************ 00:07:44.210 17:13:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.210 17:13:54 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:44.210 17:13:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.210 17:13:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.210 17:13:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.210 ************************************ 00:07:44.210 START TEST accel_copy_crc32c 00:07:44.210 ************************************ 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:44.210 17:13:54 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:44.210 [2024-07-15 17:13:54.765999] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:44.210 [2024-07-15 17:13:54.766180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78180 ] 00:07:44.211 [2024-07-15 17:13:54.917545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:44.211 [2024-07-15 17:13:54.940844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.211 [2024-07-15 17:13:55.040885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.469 17:13:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 
17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.841 00:07:45.841 real 0m1.581s 00:07:45.841 user 0m1.314s 00:07:45.841 sys 0m0.172s 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.841 17:13:56 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:45.841 ************************************ 00:07:45.841 END TEST accel_copy_crc32c 00:07:45.841 ************************************ 00:07:45.841 17:13:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.841 17:13:56 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.841 17:13:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.841 17:13:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.841 17:13:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.841 ************************************ 00:07:45.841 START TEST accel_copy_crc32c_C2 00:07:45.841 ************************************ 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.841 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:45.841 [2024-07-15 17:13:56.397825] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:45.841 [2024-07-15 17:13:56.398029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78215 ] 00:07:45.841 [2024-07-15 17:13:56.549578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.841 [2024-07-15 17:13:56.570099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.841 [2024-07-15 17:13:56.671264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.099 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 
17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.100 17:13:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.478 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.478 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.478 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.478 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.478 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.479 00:07:47.479 real 0m1.574s 00:07:47.479 user 0m1.294s 00:07:47.479 sys 0m0.191s 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.479 17:13:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 ************************************ 00:07:47.479 END TEST accel_copy_crc32c_C2 00:07:47.479 ************************************ 00:07:47.479 17:13:57 accel -- common/autotest_common.sh@1142 -- # return 0 
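# [Editorial sketch, not part of the captured build output] Each accel_* test above drives the standalone accel_perf example with the arguments shown in its own trace line. Assuming the same checkout path as this run, the copy_crc32c -C 2 case that just finished could be replayed by hand roughly as follows; the "-c /dev/fd/62" JSON config seen in the trace is supplied by the test harness and can likely be omitted when no custom accel module needs to be loaded:
#
#   /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
#
# The real/user/sys triplet printed at the end of each test appears to come from the shell's time keyword wrapping the run_test helper, so it times the whole accel_perf invocation rather than the copy/CRC operation alone.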
00:07:47.479 17:13:57 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:47.479 17:13:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:47.479 17:13:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.479 17:13:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 ************************************ 00:07:47.479 START TEST accel_dualcast 00:07:47.479 ************************************ 00:07:47.479 17:13:57 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:47.479 17:13:57 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:47.479 [2024-07-15 17:13:58.023023] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:47.479 [2024-07-15 17:13:58.023233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78251 ] 00:07:47.479 [2024-07-15 17:13:58.174897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:47.479 [2024-07-15 17:13:58.196840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.479 [2024-07-15 17:13:58.298818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.737 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.738 17:13:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:49.113 17:13:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.113 00:07:49.113 real 0m1.571s 00:07:49.113 user 0m1.312s 00:07:49.113 sys 0m0.166s 00:07:49.113 17:13:59 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.113 ************************************ 00:07:49.113 END TEST accel_dualcast 00:07:49.113 ************************************ 00:07:49.113 17:13:59 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:49.113 17:13:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.113 17:13:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:49.113 17:13:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.113 17:13:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.113 17:13:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.113 ************************************ 00:07:49.113 START TEST accel_compare 00:07:49.113 ************************************ 00:07:49.113 17:13:59 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:49.113 17:13:59 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:49.113 [2024-07-15 17:13:59.631689] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:49.114 [2024-07-15 17:13:59.631856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78286 ] 00:07:49.114 [2024-07-15 17:13:59.774016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:49.114 [2024-07-15 17:13:59.798177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.114 [2024-07-15 17:13:59.906229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.114 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:49.372 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.373 17:13:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:50.306 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:14:01 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:50.307 17:14:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.307 00:07:50.307 real 0m1.566s 00:07:50.307 user 0m0.014s 00:07:50.307 sys 0m0.008s 00:07:50.307 17:14:01 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.307 ************************************ 00:07:50.307 END TEST accel_compare 00:07:50.307 ************************************ 00:07:50.307 17:14:01 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:50.565 17:14:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.565 17:14:01 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:50.565 17:14:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:50.565 17:14:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.565 17:14:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.565 ************************************ 00:07:50.565 START TEST accel_xor 00:07:50.565 ************************************ 00:07:50.565 17:14:01 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:50.565 17:14:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:50.565 [2024-07-15 17:14:01.251559] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:50.565 [2024-07-15 17:14:01.251760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78322 ] 00:07:50.565 [2024-07-15 17:14:01.404819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.824 [2024-07-15 17:14:01.429432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.824 [2024-07-15 17:14:01.531122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.824 17:14:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.197 00:07:52.197 real 0m1.585s 00:07:52.197 user 0m0.017s 00:07:52.197 sys 0m0.002s 00:07:52.197 17:14:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.197 ************************************ 00:07:52.197 END TEST accel_xor 00:07:52.197 ************************************ 00:07:52.197 17:14:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:52.197 17:14:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.197 17:14:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:52.197 17:14:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.197 17:14:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.197 17:14:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.197 ************************************ 00:07:52.197 START TEST accel_xor 00:07:52.197 ************************************ 00:07:52.197 17:14:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.197 17:14:02 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:52.198 17:14:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:52.198 [2024-07-15 17:14:02.889702] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:52.198 [2024-07-15 17:14:02.889910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78363 ] 00:07:52.198 [2024-07-15 17:14:03.042562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:52.455 [2024-07-15 17:14:03.067047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.455 [2024-07-15 17:14:03.161835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.455 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.456 17:14:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:53.826 17:14:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.826 00:07:53.826 real 0m1.574s 00:07:53.826 user 0m1.296s 00:07:53.826 sys 0m0.184s 00:07:53.826 17:14:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.826 17:14:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 ************************************ 00:07:53.826 END TEST accel_xor 00:07:53.826 ************************************ 00:07:53.826 17:14:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.826 17:14:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:53.826 17:14:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:53.826 17:14:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.826 17:14:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.826 ************************************ 00:07:53.826 START TEST accel_dif_verify 00:07:53.826 ************************************ 00:07:53.826 17:14:04 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:53.826 17:14:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:53.826 [2024-07-15 17:14:04.502188] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:53.826 [2024-07-15 17:14:04.502356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78393 ] 00:07:53.826 [2024-07-15 17:14:04.644961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:53.826 [2024-07-15 17:14:04.665149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.084 [2024-07-15 17:14:04.759127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.084 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:54.085 17:14:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:06 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:55.526 17:14:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.526 00:07:55.526 real 0m1.548s 00:07:55.526 user 0m1.295s 00:07:55.526 sys 0m0.159s 00:07:55.526 ************************************ 00:07:55.526 END TEST accel_dif_verify 00:07:55.526 ************************************ 00:07:55.526 17:14:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.526 17:14:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:55.526 17:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.526 17:14:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:55.526 17:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:55.526 17:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.526 17:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.526 ************************************ 00:07:55.526 START TEST accel_dif_generate 00:07:55.526 ************************************ 00:07:55.526 17:14:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 
00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.526 17:14:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:55.527 17:14:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:55.527 [2024-07-15 17:14:06.107928] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:55.527 [2024-07-15 17:14:06.108116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78434 ] 00:07:55.527 [2024-07-15 17:14:06.258204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:55.527 [2024-07-15 17:14:06.282573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.784 [2024-07-15 17:14:06.385263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.784 17:14:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:57.186 17:14:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.186 00:07:57.186 real 0m1.574s 00:07:57.186 user 0m0.013s 00:07:57.186 sys 0m0.005s 00:07:57.186 17:14:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:57.186 ************************************ 00:07:57.186 END TEST accel_dif_generate 00:07:57.186 ************************************ 00:07:57.186 17:14:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 17:14:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.186 17:14:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:57.186 17:14:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:57.186 17:14:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.186 17:14:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 ************************************ 00:07:57.186 START TEST accel_dif_generate_copy 00:07:57.186 ************************************ 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:57.186 17:14:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:57.186 [2024-07-15 17:14:07.720500] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:57.186 [2024-07-15 17:14:07.720647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78464 ] 00:07:57.186 [2024-07-15 17:14:07.862698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.186 [2024-07-15 17:14:07.880631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.186 [2024-07-15 17:14:07.981831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.446 17:14:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.382 ************************************ 00:07:58.382 END TEST accel_dif_generate_copy 00:07:58.382 ************************************ 00:07:58.382 00:07:58.382 real 0m1.554s 00:07:58.382 user 0m1.296s 00:07:58.382 sys 0m0.165s 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.382 17:14:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.641 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.641 17:14:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:58.641 17:14:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.641 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:58.641 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.641 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.641 ************************************ 00:07:58.641 START TEST accel_comp 00:07:58.641 ************************************ 00:07:58.641 17:14:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:58.641 17:14:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:58.641 [2024-07-15 17:14:09.332731] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:58.641 [2024-07-15 17:14:09.332911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78505 ] 00:07:58.641 [2024-07-15 17:14:09.474127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.641 [2024-07-15 17:14:09.495346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.900 [2024-07-15 17:14:09.595308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 
-- # val=compress 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.900 17:14:09 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.900 17:14:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:00.296 17:14:10 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.296 00:08:00.296 real 0m1.564s 00:08:00.296 user 0m1.311s 00:08:00.296 sys 0m0.159s 00:08:00.296 17:14:10 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.296 17:14:10 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:00.296 ************************************ 00:08:00.297 END TEST accel_comp 00:08:00.297 ************************************ 00:08:00.297 17:14:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.297 17:14:10 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.297 17:14:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:00.297 17:14:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.297 17:14:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.297 ************************************ 00:08:00.297 START TEST accel_decomp 00:08:00.297 ************************************ 00:08:00.297 17:14:10 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.297 17:14:10 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:00.297 17:14:10 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:00.297 [2024-07-15 17:14:10.967487] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:00.297 [2024-07-15 17:14:10.967703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78535 ] 00:08:00.297 [2024-07-15 17:14:11.120043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
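The accel_decomp run starting above drives the accel_perf example with a decompress workload against the pre-built bib test file, feeding a generated JSON accel config over /dev/fd/62; the -y switch appears to ask accel_perf to verify the decompressed output. A minimal way to reproduce a similar run by hand is sketched below. This is a hedged sketch, not the harness itself: it omits the generated config, which should be harmless here since the trace shows the built-in software module being selected anyway, and the paths assume the same vagrant checkout used by this job.
# Sketch only: -c (JSON accel config) omitted, so accel_perf falls back to the software module
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y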
00:08:00.297 [2024-07-15 17:14:11.139429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.556 [2024-07-15 17:14:11.250764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.556 17:14:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.932 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.933 17:14:12 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 ************************************ 00:08:01.933 END TEST accel_decomp 00:08:01.933 ************************************ 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:01.933 17:14:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.933 00:08:01.933 real 0m1.599s 00:08:01.933 user 0m0.017s 00:08:01.933 sys 0m0.002s 00:08:01.933 17:14:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.933 17:14:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:01.933 17:14:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.933 17:14:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.933 17:14:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:01.933 17:14:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.933 17:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.933 ************************************ 00:08:01.933 START TEST accel_decomp_full 00:08:01.933 ************************************ 00:08:01.933 17:14:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 
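The accel_decomp_full variant that starts above differs from the plain accel_decomp run only by the extra -o 0 argument to accel_perf. Judging from the trace that follows, -o sets the transfer size and 0 appears to mean "use the whole input file": the per-operation data size reported below grows from the 4096 bytes of the previous run to 111250 bytes. A hedged one-line equivalent, with the generated config omitted as before:
# -o 0: full-size buffers (interpretation inferred from the 111250-byte value in this trace)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0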
00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:01.933 17:14:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:01.933 [2024-07-15 17:14:12.597049] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:01.933 [2024-07-15 17:14:12.597244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78576 ] 00:08:01.933 [2024-07-15 17:14:12.748298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.933 [2024-07-15 17:14:12.769699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.191 [2024-07-15 17:14:12.868650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:02.191 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:02.192 17:14:12 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.192 17:14:12 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:02.192 17:14:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.563 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.564 17:14:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.564 00:08:03.564 real 0m1.584s 00:08:03.564 user 0m1.318s 00:08:03.564 sys 0m0.173s 00:08:03.564 17:14:14 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.564 17:14:14 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:03.564 ************************************ 00:08:03.564 END TEST accel_decomp_full 00:08:03.564 ************************************ 00:08:03.564 17:14:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.564 17:14:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.564 17:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:03.564 17:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.564 17:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.564 ************************************ 00:08:03.564 START TEST accel_decomp_mcore 00:08:03.564 ************************************ 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:03.564 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:03.564 [2024-07-15 17:14:14.225397] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:03.564 [2024-07-15 17:14:14.225620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78612 ] 00:08:03.564 [2024-07-15 17:14:14.372301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
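accel_decomp_mcore repeats the decompress test with -m 0xf, an EAL core mask covering cores 0 through 3, which is why the app reports four available cores and starts four reactors just below. A hand-run sketch under the same path and config assumptions as the earlier examples:
# -m 0xf: run the SPDK app on cores 0-3 (four reactors)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf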
00:08:03.564 [2024-07-15 17:14:14.393600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.822 [2024-07-15 17:14:14.497242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.822 [2024-07-15 17:14:14.497442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.822 [2024-07-15 17:14:14.497498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.822 [2024-07-15 17:14:14.497632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.822 17:14:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:05.219 17:14:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.219 00:08:05.219 real 0m1.587s 00:08:05.219 user 0m0.019s 00:08:05.220 sys 0m0.003s 00:08:05.220 17:14:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.220 ************************************ 00:08:05.220 END TEST accel_decomp_mcore 00:08:05.220 ************************************ 00:08:05.220 17:14:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 17:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.220 17:14:15 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.220 17:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:05.220 17:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.220 17:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 ************************************ 00:08:05.220 START TEST accel_decomp_full_mcore 00:08:05.220 ************************************ 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:05.220 17:14:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:05.220 [2024-07-15 17:14:15.855794] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:05.220 [2024-07-15 17:14:15.855989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78650 ] 00:08:05.220 [2024-07-15 17:14:16.000456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
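accel_decomp_full_mcore, started above, combines the two previous variants: full-size 111250-byte buffers (-o 0) and the four-core mask (-m 0xf), so the decompress workload below runs on four reactors with whole-file transfers. Equivalent hedged sketch, again without the generated JSON config:
# Full-buffer decompress across four cores
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf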
00:08:05.220 [2024-07-15 17:14:16.021708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.478 [2024-07-15 17:14:16.127020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.478 [2024-07-15 17:14:16.127195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.478 [2024-07-15 17:14:16.127278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.478 [2024-07-15 17:14:16.127411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:08:05.478 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:08:05.479 17:14:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.853 00:08:06.853 real 0m1.586s 00:08:06.853 user 0m0.015s 00:08:06.853 sys 0m0.004s 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.853 ************************************ 00:08:06.853 END TEST accel_decomp_full_mcore 00:08:06.853 ************************************ 00:08:06.853 17:14:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:06.853 17:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.853 17:14:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.853 17:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:06.853 17:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.853 17:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.853 ************************************ 00:08:06.853 START TEST accel_decomp_mthread 00:08:06.853 ************************************ 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:06.853 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:06.853 [2024-07-15 17:14:17.485782] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:06.853 [2024-07-15 17:14:17.485957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78694 ] 00:08:06.853 [2024-07-15 17:14:17.629042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
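The accel_decomp_mthread case starting here drives the accel_perf example binary directly; the full command is visible in the accel.sh@12 trace record above. A minimal standalone sketch of that invocation follows (the -c /dev/fd/62 config is dropped because the harness appears to pass an essentially empty JSON config on that fd, and the readings of -T and -y are inferred from the test names rather than from accel_perf's help output):

    # 1-second software decompress of the test bitstream on two threads,
    # with output verification (-y), mirroring the traced command
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -T 2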
00:08:06.853 [2024-07-15 17:14:17.649558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.111 [2024-07-15 17:14:17.744116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.111 17:14:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.519 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.519 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.519 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.520 17:14:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.520 17:14:19 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.520 00:08:08.520 real 0m1.555s 00:08:08.520 user 0m1.286s 00:08:08.520 sys 0m0.174s 00:08:08.520 ************************************ 00:08:08.520 END TEST accel_decomp_mthread 00:08:08.520 ************************************ 00:08:08.520 17:14:19 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.520 17:14:19 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:08.520 17:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.520 17:14:19 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.520 17:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:08.520 17:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.520 17:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.520 ************************************ 00:08:08.520 START TEST accel_decomp_full_mthread 00:08:08.520 ************************************ 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.520 17:14:19 
accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:08.520 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:08.520 [2024-07-15 17:14:19.103891] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:08.520 [2024-07-15 17:14:19.104079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78724 ] 00:08:08.520 [2024-07-15 17:14:19.254209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
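Both decompress variants emit the long runs of IFS=: / read -r var val / case "$var" records seen in these traces; that is accel.sh replaying accel_perf's key:value settings dump to record which opcode and module actually ran (the "full" flavors differ only by the -o 0 flag, which is presumably why the traced buffer size here is '111250 bytes' instead of the '4096 bytes' of the non-full runs). The shape of that parsing loop, roughly (the key patterns and the input variable are illustrative; only accel_opc and accel_module are taken from the trace):

    # walk a "key:value" dump and remember the opcode and module that ran
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;      # e.g. decompress
            *module*) accel_module=$val ;;   # e.g. software
            *) ;;                            # timings, queue depth, etc. ignored here
        esac
    done <<< "$accel_perf_dump"              # stand-in for the fd accel.sh really reads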
00:08:08.520 [2024-07-15 17:14:19.274654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.778 [2024-07-15 17:14:19.375923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.778 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.779 17:14:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.885 00:08:09.885 real 0m1.601s 00:08:09.885 user 0m0.018s 00:08:09.885 sys 0m0.005s 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.885 17:14:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 ************************************ 00:08:09.885 END TEST accel_decomp_full_mthread 00:08:09.885 ************************************ 00:08:09.885 17:14:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.885 17:14:20 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:09.885 17:14:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:09.885 17:14:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests 
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:09.885 17:14:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.885 17:14:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.885 17:14:20 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.885 17:14:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.885 17:14:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.885 17:14:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.885 17:14:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.885 17:14:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:09.885 17:14:20 accel -- accel/accel.sh@41 -- # jq -r . 00:08:09.885 17:14:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 ************************************ 00:08:09.885 START TEST accel_dif_functional_tests 00:08:09.885 ************************************ 00:08:09.886 17:14:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.141 [2024-07-15 17:14:20.803234] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:10.141 [2024-07-15 17:14:20.803433] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78766 ] 00:08:10.141 [2024-07-15 17:14:20.958753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.142 [2024-07-15 17:14:20.981055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.399 [2024-07-15 17:14:21.120509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.399 [2024-07-15 17:14:21.120571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.399 [2024-07-15 17:14:21.120629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.657 00:08:10.657 00:08:10.657 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.657 http://cunit.sourceforge.net/ 00:08:10.657 00:08:10.657 00:08:10.657 Suite: accel_dif 00:08:10.657 Test: verify: DIF generated, GUARD check ...passed 00:08:10.657 Test: verify: DIF generated, APPTAG check ...passed 00:08:10.657 Test: verify: DIF generated, REFTAG check ...passed 00:08:10.657 Test: verify: DIF not generated, GUARD check ...passed 00:08:10.657 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 17:14:21.265786] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.657 passed 00:08:10.657 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 17:14:21.265938] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.657 passed 00:08:10.657 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:10.657 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 17:14:21.266027] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.657 passed 00:08:10.657 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:10.657 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:10.657 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-15 17:14:21.266299] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:08:10.657 passed 00:08:10.657 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:10.657 Test: verify copy: DIF generated, GUARD check ...passed 00:08:10.657 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:10.657 Test: verify copy: DIF generated, REFTAG check ...passed[2024-07-15 17:14:21.266594] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:10.657 00:08:10.657 Test: verify copy: DIF not generated, GUARD check ...passed 00:08:10.657 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 17:14:21.267059] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.657 passed 00:08:10.657 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 17:14:21.267147] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.657 passed 00:08:10.657 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 17:14:21.267456] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.657 passed 00:08:10.657 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:10.657 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:10.657 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:10.657 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:10.657 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:10.657 Test: generate copy: iovecs-len validate ...passed 00:08:10.657 Test: generate copy: buffer alignment validate ...[2024-07-15 17:14:21.268258] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:10.657 passed 00:08:10.657 00:08:10.657 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.657 suites 1 1 n/a 0 0 00:08:10.657 tests 26 26 26 0 0 00:08:10.657 asserts 115 115 115 0 n/a 00:08:10.658 00:08:10.658 Elapsed time = 0.009 seconds 00:08:10.916 00:08:10.916 real 0m0.955s 00:08:10.916 user 0m1.289s 00:08:10.916 sys 0m0.303s 00:08:10.916 17:14:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.916 17:14:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:10.916 ************************************ 00:08:10.916 END TEST accel_dif_functional_tests 00:08:10.916 ************************************ 00:08:10.916 17:14:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.916 00:08:10.916 real 0m36.845s 00:08:10.916 user 0m37.339s 00:08:10.916 sys 0m5.601s 00:08:10.916 17:14:21 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.916 17:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.916 ************************************ 00:08:10.916 END TEST accel 00:08:10.916 ************************************ 00:08:10.916 17:14:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:10.916 17:14:21 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:10.916 17:14:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.916 17:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.916 17:14:21 -- common/autotest_common.sh@10 -- # set +x 00:08:10.916 ************************************ 00:08:10.916 START TEST accel_rpc 00:08:10.916 ************************************ 00:08:10.916 17:14:21 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:11.174 * Looking for test storage... 00:08:11.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:11.174 17:14:21 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.174 17:14:21 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=78837 00:08:11.174 17:14:21 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:11.174 17:14:21 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 78837 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 78837 ']' 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.174 17:14:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.174 [2024-07-15 17:14:21.971540] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
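The accel_dif_functional_tests suite summarized just above is a CUnit binary rather than an accel_perf run: the harness hands it an accel JSON config on file descriptor 62 and only checks the suite's pass/fail counts. A hedged way to invoke it by hand (the empty JSON object is a stand-in for whatever build_accel_config assembles; it was not verified that the binary accepts an empty config):

    cfg='{}'   # stand-in config; the harness builds this with build_accel_config
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62<<<"$cfg"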
00:08:11.174 [2024-07-15 17:14:21.971973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78837 ] 00:08:11.432 [2024-07-15 17:14:22.126695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.432 [2024-07-15 17:14:22.152127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.432 [2024-07-15 17:14:22.271396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.366 17:14:22 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.366 17:14:22 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:12.366 17:14:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:12.366 17:14:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:12.366 17:14:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:12.366 17:14:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:12.366 17:14:22 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:12.366 17:14:22 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.366 17:14:22 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.366 17:14:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.366 ************************************ 00:08:12.366 START TEST accel_assign_opcode 00:08:12.366 ************************************ 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:12.366 [2024-07-15 17:14:22.940394] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:12.366 [2024-07-15 17:14:22.948388] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.366 17:14:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- 
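The accel_rpc run that has just started exercises the opcode-assignment RPCs against a target launched with --wait-for-rpc, so the assignment can land before framework initialization. Outside the harness the traced sequence looks roughly like this (waitforlisten is replaced by a crude sleep, and the deliberate '-m incorrect' negative case is omitted):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    sleep 1   # stand-in for the harness's waitforlisten

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    $rpc framework_start_init                     # finish initialization
    $rpc accel_get_opc_assignments | jq -r .copy  # expected to print: software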
common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:12.366 17:14:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.624 software 00:08:12.624 00:08:12.624 real 0m0.306s 00:08:12.624 user 0m0.050s 00:08:12.624 sys 0m0.010s 00:08:12.624 17:14:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.624 17:14:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:12.624 ************************************ 00:08:12.624 END TEST accel_assign_opcode 00:08:12.624 ************************************ 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:12.624 17:14:23 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 78837 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 78837 ']' 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 78837 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78837 00:08:12.624 killing process with pid 78837 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78837' 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@967 -- # kill 78837 00:08:12.624 17:14:23 accel_rpc -- common/autotest_common.sh@972 -- # wait 78837 00:08:13.189 ************************************ 00:08:13.189 END TEST accel_rpc 00:08:13.189 ************************************ 00:08:13.189 00:08:13.189 real 0m2.027s 00:08:13.189 user 0m2.020s 00:08:13.189 sys 0m0.581s 00:08:13.189 17:14:23 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.189 17:14:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.189 17:14:23 -- common/autotest_common.sh@1142 -- # return 0 00:08:13.189 17:14:23 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:13.189 17:14:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.189 17:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.189 17:14:23 -- common/autotest_common.sh@10 -- # set +x 00:08:13.189 ************************************ 00:08:13.189 START TEST app_cmdline 00:08:13.189 ************************************ 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:13.189 * Looking for test storage... 00:08:13.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:13.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
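app_cmdline, which starts here, launches the target with an RPC allowlist of exactly two methods (the --rpcs-allowed flag in the next record) and then checks both directions: the allowed spdk_get_version call returns the version object shown below, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601, "Method not found". A condensed sketch of the same check:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 1   # stand-in for the harness's waitforlisten

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version         # allowed: prints the version/commit object
    $rpc env_dpdk_get_mem_stats   # not on the allowlist: fails with "Method not found"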
00:08:13.189 17:14:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:13.189 17:14:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=78931 00:08:13.189 17:14:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 78931 00:08:13.189 17:14:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 78931 ']' 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.189 17:14:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:13.446 [2024-07-15 17:14:24.059096] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:13.446 [2024-07-15 17:14:24.059601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78931 ] 00:08:13.446 [2024-07-15 17:14:24.215583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.446 [2024-07-15 17:14:24.233924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.704 [2024-07-15 17:14:24.370287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.272 17:14:24 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.272 17:14:24 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:14.272 17:14:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:14.530 { 00:08:14.530 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:08:14.530 "fields": { 00:08:14.530 "major": 24, 00:08:14.530 "minor": 9, 00:08:14.530 "patch": 0, 00:08:14.530 "suffix": "-pre", 00:08:14.530 "commit": "a95bbf233" 00:08:14.530 } 00:08:14.530 } 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:14.530 17:14:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:14.530 17:14:25 
app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:14.530 17:14:25 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:14.788 request: 00:08:14.788 { 00:08:14.788 "method": "env_dpdk_get_mem_stats", 00:08:14.788 "req_id": 1 00:08:14.788 } 00:08:14.788 Got JSON-RPC error response 00:08:14.788 response: 00:08:14.788 { 00:08:14.788 "code": -32601, 00:08:14.788 "message": "Method not found" 00:08:14.788 } 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:14.788 17:14:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 78931 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 78931 ']' 00:08:14.788 17:14:25 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 78931 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78931 00:08:15.047 killing process with pid 78931 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78931' 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@967 -- # kill 78931 00:08:15.047 17:14:25 app_cmdline -- common/autotest_common.sh@972 -- # wait 78931 00:08:15.305 00:08:15.305 real 0m2.288s 00:08:15.305 user 0m2.691s 00:08:15.305 sys 0m0.686s 00:08:15.305 ************************************ 00:08:15.305 END TEST app_cmdline 00:08:15.305 ************************************ 00:08:15.305 17:14:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.305 17:14:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:15.563 17:14:26 -- common/autotest_common.sh@1142 -- # return 0 00:08:15.563 17:14:26 
-- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:15.563 17:14:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.563 17:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.563 17:14:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.563 ************************************ 00:08:15.563 START TEST version 00:08:15.563 ************************************ 00:08:15.563 17:14:26 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:15.563 * Looking for test storage... 00:08:15.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:15.563 17:14:26 version -- app/version.sh@17 -- # get_header_version major 00:08:15.563 17:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # cut -f2 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:15.564 17:14:26 version -- app/version.sh@17 -- # major=24 00:08:15.564 17:14:26 version -- app/version.sh@18 -- # get_header_version minor 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # cut -f2 00:08:15.564 17:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:15.564 17:14:26 version -- app/version.sh@18 -- # minor=9 00:08:15.564 17:14:26 version -- app/version.sh@19 -- # get_header_version patch 00:08:15.564 17:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # cut -f2 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:15.564 17:14:26 version -- app/version.sh@19 -- # patch=0 00:08:15.564 17:14:26 version -- app/version.sh@20 -- # get_header_version suffix 00:08:15.564 17:14:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # cut -f2 00:08:15.564 17:14:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:15.564 17:14:26 version -- app/version.sh@20 -- # suffix=-pre 00:08:15.564 17:14:26 version -- app/version.sh@22 -- # version=24.9 00:08:15.564 17:14:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:15.564 17:14:26 version -- app/version.sh@28 -- # version=24.9rc0 00:08:15.564 17:14:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:15.564 17:14:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:15.564 17:14:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:15.564 17:14:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:15.564 00:08:15.564 real 0m0.147s 00:08:15.564 user 0m0.082s 00:08:15.564 sys 0m0.093s 00:08:15.564 ************************************ 00:08:15.564 END TEST version 00:08:15.564 ************************************ 00:08:15.564 17:14:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.564 17:14:26 version -- common/autotest_common.sh@10 -- # set +x 
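get_header_version in version.sh is nothing more than the grep/cut/tr pipeline traced above, applied to include/spdk/version.h once per component. The major-version extraction on its own (cut -f2 relies on the header's '#define<TAB>NAME<TAB>value' layout, which is implied by the trace rather than checked here):

    # pull the numeric major version out of the installed header
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'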
00:08:15.564 17:14:26 -- common/autotest_common.sh@1142 -- # return 0 00:08:15.564 17:14:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:15.564 17:14:26 -- spdk/autotest.sh@198 -- # uname -s 00:08:15.564 17:14:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:15.564 17:14:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:15.564 17:14:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:15.564 17:14:26 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:15.564 17:14:26 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:15.564 17:14:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.564 17:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.564 17:14:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.564 ************************************ 00:08:15.564 START TEST blockdev_nvme 00:08:15.564 ************************************ 00:08:15.564 17:14:26 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:15.822 * Looking for test storage... 00:08:15.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:15.822 17:14:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:15.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79082 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:15.822 17:14:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 79082 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 79082 ']' 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.822 17:14:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.822 [2024-07-15 17:14:26.575778] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:15.822 [2024-07-15 17:14:26.576228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79082 ] 00:08:16.082 [2024-07-15 17:14:26.723413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
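setup_nvme_conf, invoked just below, asks scripts/gen_nvme.sh for a bdev subsystem config and loads it with load_subsystem_config; the trace shows four PCIe controllers (Nvme0 through Nvme3) being attached. A trimmed single-controller version of that JSON, using the same inline -j form that appears in the trace:

    # attach one local PCIe NVMe controller as bdev "Nvme0"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
      ]
    }'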
00:08:16.082 [2024-07-15 17:14:26.740630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.082 [2024-07-15 17:14:26.842329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.652 17:14:27 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.652 17:14:27 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:08:16.652 17:14:27 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:16.652 17:14:27 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:16.652 17:14:27 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:16.652 17:14:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:16.652 17:14:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:16.910 17:14:27 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:16.910 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.910 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:17.168 17:14:27 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.168 17:14:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 17:14:27 
blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:17.168 17:14:28 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.427 17:14:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:17.427 17:14:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:17.428 17:14:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "83d497fd-b804-441a-8764-4445a5c70102"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "83d497fd-b804-441a-8764-4445a5c70102",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2a4407d9-5c59-417a-813e-46225ed5b6de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2a4407d9-5c59-417a-813e-46225ed5b6de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' 
"3d76583e-cc29-45c8-a086-b64ca972d930"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d76583e-cc29-45c8-a086-b64ca972d930",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bf7439b7-1bb2-4447-83da-b898b1449693"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf7439b7-1bb2-4447-83da-b898b1449693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ce2476bf-1381-4319-a80e-250ea1e7fdff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce2476bf-1381-4319-a80e-250ea1e7fdff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e2718a03-1f69-4756-b9ea-3809e6849592"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e2718a03-1f69-4756-b9ea-3809e6849592",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:17.428 17:14:28 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:17.428 17:14:28 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:17.428 17:14:28 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:17.428 17:14:28 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 79082 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 79082 ']' 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 79082 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79082 00:08:17.428 killing process with pid 79082 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.428 17:14:28 blockdev_nvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 79082' 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 79082 00:08:17.428 17:14:28 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 79082 00:08:17.993 17:14:28 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:17.993 17:14:28 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:17.993 17:14:28 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:17.993 17:14:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.993 17:14:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.993 ************************************ 00:08:17.993 START TEST bdev_hello_world 00:08:17.993 ************************************ 00:08:17.993 17:14:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:18.251 [2024-07-15 17:14:28.856821] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:18.251 [2024-07-15 17:14:28.857034] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79155 ] 00:08:18.251 [2024-07-15 17:14:29.005257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:18.251 [2024-07-15 17:14:29.026513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.510 [2024-07-15 17:14:29.158143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.768 [2024-07-15 17:14:29.621949] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:18.768 [2024-07-15 17:14:29.622026] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:18.768 [2024-07-15 17:14:29.622071] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:19.026 [2024-07-15 17:14:29.625139] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:19.026 [2024-07-15 17:14:29.625755] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:19.026 [2024-07-15 17:14:29.625801] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:19.026 [2024-07-15 17:14:29.626045] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
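The notices above come from the stock hello_bdev example being run against the first attached namespace. For reference, the invocation (copied from the run_test line in this sub-test) and how its notices map to the steps seen in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1
    # hello_start    -> open bdev Nvme0n1 and get an I/O channel
    # hello_write    -> write the "Hello World!" string to the bdev
    # write_complete -> write finished, read it back
    # hello_read     -> issue the read
    # read_complete  -> prints "Read string from bdev : Hello World!" and stops the app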
00:08:19.026 00:08:19.026 [2024-07-15 17:14:29.626101] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:19.285 00:08:19.285 real 0m1.242s 00:08:19.285 user 0m0.833s 00:08:19.285 sys 0m0.299s 00:08:19.285 17:14:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.285 17:14:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:19.285 ************************************ 00:08:19.285 END TEST bdev_hello_world 00:08:19.285 ************************************ 00:08:19.285 17:14:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:19.285 17:14:30 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:19.285 17:14:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.285 17:14:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.285 17:14:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.285 ************************************ 00:08:19.285 START TEST bdev_bounds 00:08:19.285 ************************************ 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:19.285 Process bdevio pid: 79186 00:08:19.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79186 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79186' 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79186 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 79186 ']' 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.285 17:14:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:19.543 [2024-07-15 17:14:30.154339] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:19.543 [2024-07-15 17:14:30.154569] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79186 ] 00:08:19.543 [2024-07-15 17:14:30.304269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
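The bounds test starts the bdevio application in wait mode and then drives it over RPC with its companion script (the perform_tests call appears in the next trace line). A condensed view of the two commands involved, copied from the trace; reading -w as "wait for the RPC trigger" and -s 0 as the pre-reserved memory size is an interpretation based on how they are used here:

    # started under run_test bdev_bounds, pid 79186 in this run
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
    # once it is listening on /var/tmp/spdk.sock, the suites are triggered with:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests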
00:08:19.543 [2024-07-15 17:14:30.320972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.800 [2024-07-15 17:14:30.460589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.800 [2024-07-15 17:14:30.460706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.800 [2024-07-15 17:14:30.460656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.366 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.366 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:20.366 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:20.625 I/O targets: 00:08:20.625 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:20.625 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:20.625 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.625 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.625 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.625 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:20.625 00:08:20.625 00:08:20.625 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.625 http://cunit.sourceforge.net/ 00:08:20.625 00:08:20.625 00:08:20.625 Suite: bdevio tests on: Nvme3n1 00:08:20.625 Test: blockdev write read block ...passed 00:08:20.625 Test: blockdev write zeroes read block ...passed 00:08:20.625 Test: blockdev write zeroes read no split ...passed 00:08:20.625 Test: blockdev write zeroes read split ...passed 00:08:20.625 Test: blockdev write zeroes read split partial ...passed 00:08:20.625 Test: blockdev reset ...[2024-07-15 17:14:31.281419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:20.625 [2024-07-15 17:14:31.284400] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.625 passed 00:08:20.625 Test: blockdev write read 8 blocks ...passed 00:08:20.625 Test: blockdev write read size > 128k ...passed 00:08:20.625 Test: blockdev write read invalid size ...passed 00:08:20.625 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.625 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.625 Test: blockdev write read max offset ...passed 00:08:20.625 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.625 Test: blockdev writev readv 8 blocks ...passed 00:08:20.625 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.625 Test: blockdev writev readv block ...passed 00:08:20.625 Test: blockdev writev readv size > 128k ...passed 00:08:20.625 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.625 Test: blockdev comparev and writev ...[2024-07-15 17:14:31.292302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca004000 len:0x1000 00:08:20.625 [2024-07-15 17:14:31.292399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.625 passed 00:08:20.625 Test: blockdev nvme passthru rw ...passed 00:08:20.625 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:14:31.293600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.625 [2024-07-15 17:14:31.293642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.625 passed 00:08:20.625 Test: blockdev nvme admin passthru ...passed 00:08:20.625 Test: blockdev copy ...passed 00:08:20.626 Suite: bdevio tests on: Nvme2n3 00:08:20.626 Test: blockdev write read block ...passed 00:08:20.626 Test: blockdev write zeroes read block ...passed 00:08:20.626 Test: blockdev write zeroes read no split ...passed 00:08:20.626 Test: blockdev write zeroes read split ...passed 00:08:20.626 Test: blockdev write zeroes read split partial ...passed 00:08:20.626 Test: blockdev reset ...[2024-07-15 17:14:31.319248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:20.626 passed 00:08:20.626 Test: blockdev write read 8 blocks ...[2024-07-15 17:14:31.322570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.626 passed 00:08:20.626 Test: blockdev write read size > 128k ...passed 00:08:20.626 Test: blockdev write read invalid size ...passed 00:08:20.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.626 Test: blockdev write read max offset ...passed 00:08:20.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.626 Test: blockdev writev readv 8 blocks ...passed 00:08:20.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.626 Test: blockdev writev readv block ...passed 00:08:20.626 Test: blockdev writev readv size > 128k ...passed 00:08:20.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.626 Test: blockdev comparev and writev ...[2024-07-15 17:14:31.329849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca002000 len:0x1000 00:08:20.626 [2024-07-15 17:14:31.329923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev nvme passthru rw ...passed 00:08:20.626 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.626 Test: blockdev nvme admin passthru ...[2024-07-15 17:14:31.330954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.626 [2024-07-15 17:14:31.331002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev copy ...passed 00:08:20.626 Suite: bdevio tests on: Nvme2n2 00:08:20.626 Test: blockdev write read block ...passed 00:08:20.626 Test: blockdev write zeroes read block ...passed 00:08:20.626 Test: blockdev write zeroes read no split ...passed 00:08:20.626 Test: blockdev write zeroes read split ...passed 00:08:20.626 Test: blockdev write zeroes read split partial ...passed 00:08:20.626 Test: blockdev reset ...[2024-07-15 17:14:31.355014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:20.626 passed 00:08:20.626 Test: blockdev write read 8 blocks ...[2024-07-15 17:14:31.358771] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.626 passed 00:08:20.626 Test: blockdev write read size > 128k ...passed 00:08:20.626 Test: blockdev write read invalid size ...passed 00:08:20.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.626 Test: blockdev write read max offset ...passed 00:08:20.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.626 Test: blockdev writev readv 8 blocks ...passed 00:08:20.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.626 Test: blockdev writev readv block ...passed 00:08:20.626 Test: blockdev writev readv size > 128k ...passed 00:08:20.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.626 Test: blockdev comparev and writev ...[2024-07-15 17:14:31.366644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca00c000 len:0x1000 00:08:20.626 [2024-07-15 17:14:31.366718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev nvme passthru rw ...passed 00:08:20.626 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:14:31.367797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.626 [2024-07-15 17:14:31.367844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev nvme admin passthru ...passed 00:08:20.626 Test: blockdev copy ...passed 00:08:20.626 Suite: bdevio tests on: Nvme2n1 00:08:20.626 Test: blockdev write read block ...passed 00:08:20.626 Test: blockdev write zeroes read block ...passed 00:08:20.626 Test: blockdev write zeroes read no split ...passed 00:08:20.626 Test: blockdev write zeroes read split ...passed 00:08:20.626 Test: blockdev write zeroes read split partial ...passed 00:08:20.626 Test: blockdev reset ...[2024-07-15 17:14:31.393413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:20.626 passed 00:08:20.626 Test: blockdev write read 8 blocks ...[2024-07-15 17:14:31.396984] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.626 passed 00:08:20.626 Test: blockdev write read size > 128k ...passed 00:08:20.626 Test: blockdev write read invalid size ...passed 00:08:20.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.626 Test: blockdev write read max offset ...passed 00:08:20.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.626 Test: blockdev writev readv 8 blocks ...passed 00:08:20.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.626 Test: blockdev writev readv block ...passed 00:08:20.626 Test: blockdev writev readv size > 128k ...passed 00:08:20.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.626 Test: blockdev comparev and writev ...[2024-07-15 17:14:31.403718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c36000 len:0x1000 00:08:20.626 [2024-07-15 17:14:31.403791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev nvme passthru rw ...passed 00:08:20.626 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.626 Test: blockdev nvme admin passthru ...[2024-07-15 17:14:31.404773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.626 [2024-07-15 17:14:31.404826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.626 passed 00:08:20.626 Test: blockdev copy ...passed 00:08:20.626 Suite: bdevio tests on: Nvme1n1 00:08:20.626 Test: blockdev write read block ...passed 00:08:20.626 Test: blockdev write zeroes read block ...passed 00:08:20.626 Test: blockdev write zeroes read no split ...passed 00:08:20.626 Test: blockdev write zeroes read split ...passed 00:08:20.626 Test: blockdev write zeroes read split partial ...passed 00:08:20.626 Test: blockdev reset ...[2024-07-15 17:14:31.433471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:20.626 [2024-07-15 17:14:31.436745] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.626 passed 00:08:20.626 Test: blockdev write read 8 blocks ...passed 00:08:20.626 Test: blockdev write read size > 128k ...passed 00:08:20.626 Test: blockdev write read invalid size ...passed 00:08:20.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.626 Test: blockdev write read max offset ...passed 00:08:20.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.626 Test: blockdev writev readv 8 blocks ...passed 00:08:20.626 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.627 Test: blockdev writev readv block ...passed 00:08:20.627 Test: blockdev writev readv size > 128k ...passed 00:08:20.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.627 Test: blockdev comparev and writev ...[2024-07-15 17:14:31.446391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c32000 len:0x1000 00:08:20.627 [2024-07-15 17:14:31.446484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.627 passed 00:08:20.627 Test: blockdev nvme passthru rw ...passed 00:08:20.627 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:14:31.447382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.627 [2024-07-15 17:14:31.447422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.627 passed 00:08:20.627 Test: blockdev nvme admin passthru ...passed 00:08:20.627 Test: blockdev copy ...passed 00:08:20.627 Suite: bdevio tests on: Nvme0n1 00:08:20.627 Test: blockdev write read block ...passed 00:08:20.627 Test: blockdev write zeroes read block ...passed 00:08:20.627 Test: blockdev write zeroes read no split ...passed 00:08:20.627 Test: blockdev write zeroes read split ...passed 00:08:20.627 Test: blockdev write zeroes read split partial ...passed 00:08:20.627 Test: blockdev reset ...[2024-07-15 17:14:31.472426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:20.627 [2024-07-15 17:14:31.475476] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.627 passed 00:08:20.627 Test: blockdev write read 8 blocks ...passed 00:08:20.627 Test: blockdev write read size > 128k ...passed 00:08:20.627 Test: blockdev write read invalid size ...passed 00:08:20.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.627 Test: blockdev write read max offset ...passed 00:08:20.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.886 Test: blockdev writev readv 8 blocks ...passed 00:08:20.886 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.886 Test: blockdev writev readv block ...passed 00:08:20.886 Test: blockdev writev readv size > 128k ...passed 00:08:20.886 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.886 Test: blockdev comparev and writev ...passed 00:08:20.886 Test: blockdev nvme passthru rw ...[2024-07-15 17:14:31.482785] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:20.886 separate metadata which is not supported yet. 00:08:20.886 passed 00:08:20.886 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.886 Test: blockdev nvme admin passthru ...[2024-07-15 17:14:31.483476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:20.886 [2024-07-15 17:14:31.483533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:20.886 passed 00:08:20.886 Test: blockdev copy ...passed 00:08:20.886 00:08:20.886 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.886 suites 6 6 n/a 0 0 00:08:20.886 tests 138 138 138 0 0 00:08:20.886 asserts 893 893 893 0 n/a 00:08:20.886 00:08:20.886 Elapsed time = 0.529 seconds 00:08:20.886 0 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79186 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 79186 ']' 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 79186 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79186 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79186' 00:08:20.886 killing process with pid 79186 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 79186 00:08:20.886 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 79186 00:08:21.146 17:14:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:21.146 00:08:21.146 real 0m1.813s 00:08:21.146 user 0m4.298s 00:08:21.146 sys 0m0.465s 00:08:21.146 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.146 17:14:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:21.146 ************************************ 00:08:21.146 END 
TEST bdev_bounds 00:08:21.146 ************************************ 00:08:21.146 17:14:31 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:21.146 17:14:31 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:21.146 17:14:31 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:21.146 17:14:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.146 17:14:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.146 ************************************ 00:08:21.146 START TEST bdev_nbd 00:08:21.146 ************************************ 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79240 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:21.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
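Each nbd_start_disk RPC in the traces that follow is paired with a waitfornbd check: the helper polls /proc/partitions for the new nbd device and then reads a single 4 KiB block through it with dd to prove the export really serves I/O. A minimal sketch reconstructed from those visible calls (the sleep between retries is an assumption; the grep, dd, stat and rm steps appear verbatim in the traces below):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                    # assumed; the wait itself is not traced
        done
        # read one block through the device and make sure something arrived
        dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s $testdir/nbdtest) != 0 ]]
        rm -f $testdir/nbdtest
    }

(Here $testdir stands for /home/vagrant/spdk_repo/spdk/test/bdev, the directory used in the trace.)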
00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79240 /var/tmp/spdk-nbd.sock 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 79240 ']' 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.146 17:14:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:21.405 [2024-07-15 17:14:32.034911] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:21.405 [2024-07-15 17:14:32.035114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.405 [2024-07-15 17:14:32.189989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:21.405 [2024-07-15 17:14:32.213465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.664 [2024-07-15 17:14:32.353835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:22.230 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:22.794 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.795 1+0 records in 00:08:22.795 1+0 records out 00:08:22.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771819 s, 5.3 MB/s 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:22.795 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.052 1+0 records in 00:08:23.052 1+0 records out 00:08:23.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681267 s, 6.0 MB/s 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.052 17:14:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.309 1+0 records in 00:08:23.309 1+0 records out 00:08:23.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497421 s, 8.2 MB/s 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.309 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:23.567 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:23.567 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:23.825 17:14:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.825 1+0 records in 00:08:23.825 1+0 records out 00:08:23.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111921 s, 3.7 MB/s 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.825 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.147 1+0 records in 00:08:24.147 1+0 records out 00:08:24.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132031 s, 3.1 MB/s 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.147 17:14:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.424 1+0 records in 00:08:24.424 1+0 records out 00:08:24.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680668 s, 6.0 MB/s 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.424 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.682 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:24.682 { 00:08:24.682 "nbd_device": "/dev/nbd0", 00:08:24.682 "bdev_name": "Nvme0n1" 00:08:24.682 }, 00:08:24.682 { 00:08:24.682 "nbd_device": "/dev/nbd1", 00:08:24.682 "bdev_name": "Nvme1n1" 00:08:24.682 }, 00:08:24.682 { 00:08:24.683 "nbd_device": "/dev/nbd2", 00:08:24.683 "bdev_name": "Nvme2n1" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd3", 00:08:24.683 "bdev_name": "Nvme2n2" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd4", 00:08:24.683 "bdev_name": "Nvme2n3" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd5", 
00:08:24.683 "bdev_name": "Nvme3n1" 00:08:24.683 } 00:08:24.683 ]' 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd0", 00:08:24.683 "bdev_name": "Nvme0n1" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd1", 00:08:24.683 "bdev_name": "Nvme1n1" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd2", 00:08:24.683 "bdev_name": "Nvme2n1" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd3", 00:08:24.683 "bdev_name": "Nvme2n2" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd4", 00:08:24.683 "bdev_name": "Nvme2n3" 00:08:24.683 }, 00:08:24.683 { 00:08:24.683 "nbd_device": "/dev/nbd5", 00:08:24.683 "bdev_name": "Nvme3n1" 00:08:24.683 } 00:08:24.683 ]' 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.683 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.942 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.200 17:14:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.200 17:14:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.458 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.717 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.975 17:14:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.234 17:14:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.234 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.492 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.492 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.492 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.492 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.749 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:26.750 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:27.008 /dev/nbd0 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.008 1+0 records in 00:08:27.008 1+0 records out 00:08:27.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794197 s, 5.2 MB/s 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.008 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:27.266 /dev/nbd1 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:27.266 
17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.266 1+0 records in 00:08:27.266 1+0 records out 00:08:27.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598093 s, 6.8 MB/s 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.266 17:14:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.266 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:27.523 /dev/nbd10 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.523 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.524 1+0 records in 00:08:27.524 1+0 records out 00:08:27.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752136 s, 5.4 MB/s 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.524 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.524 
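Each nbd_start_disk call in the trace above is followed by the same readiness check: poll /proc/partitions (up to 20 attempts) for the new nbd name, then read one 4 KiB block back with dd in O_DIRECT mode and confirm via stat that the scratch file is non-empty. A rough standalone sketch of that pattern in bash is below; the helper name, retry delay, and error handling are illustrative rather than copied from SPDK's autotest_common.sh:

    # Sketch only: wait until an NBD export appears and serves a 4 KiB direct read.
    wait_for_nbd() {
        local name=$1 scratch
        scratch=$(mktemp)
        for _ in $(seq 1 20); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1                      # illustrative back-off; the real helper differs
        done
        # A single direct read proves the device node is actually wired to the bdev.
        dd if="/dev/$name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$scratch")" -ne 0 ] || return 1
        rm -f "$scratch"
    }

    wait_for_nbd nbd10    # e.g. after: rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10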
17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:27.849 /dev/nbd11 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.849 1+0 records in 00:08:27.849 1+0 records out 00:08:27.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078268 s, 5.2 MB/s 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.849 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:28.107 /dev/nbd12 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:08:28.107 1+0 records in 00:08:28.107 1+0 records out 00:08:28.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813127 s, 5.0 MB/s 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.107 17:14:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:28.365 /dev/nbd13 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.365 1+0 records in 00:08:28.365 1+0 records out 00:08:28.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714966 s, 5.7 MB/s 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.365 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.623 17:14:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd0", 00:08:28.623 "bdev_name": "Nvme0n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd1", 00:08:28.623 "bdev_name": "Nvme1n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd10", 00:08:28.623 "bdev_name": "Nvme2n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd11", 00:08:28.623 "bdev_name": "Nvme2n2" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd12", 00:08:28.623 "bdev_name": "Nvme2n3" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd13", 00:08:28.623 "bdev_name": "Nvme3n1" 00:08:28.623 } 00:08:28.623 ]' 00:08:28.623 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd0", 00:08:28.623 "bdev_name": "Nvme0n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd1", 00:08:28.623 "bdev_name": "Nvme1n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd10", 00:08:28.623 "bdev_name": "Nvme2n1" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd11", 00:08:28.623 "bdev_name": "Nvme2n2" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd12", 00:08:28.623 "bdev_name": "Nvme2n3" 00:08:28.623 }, 00:08:28.623 { 00:08:28.623 "nbd_device": "/dev/nbd13", 00:08:28.623 "bdev_name": "Nvme3n1" 00:08:28.623 } 00:08:28.624 ]' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:28.624 /dev/nbd1 00:08:28.624 /dev/nbd10 00:08:28.624 /dev/nbd11 00:08:28.624 /dev/nbd12 00:08:28.624 /dev/nbd13' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:28.624 /dev/nbd1 00:08:28.624 /dev/nbd10 00:08:28.624 /dev/nbd11 00:08:28.624 /dev/nbd12 00:08:28.624 /dev/nbd13' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:28.624 256+0 records in 00:08:28.624 256+0 records out 00:08:28.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00975377 s, 108 MB/s 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:08:28.624 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:28.881 256+0 records in 00:08:28.881 256+0 records out 00:08:28.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141953 s, 7.4 MB/s 00:08:28.881 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.881 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.139 256+0 records in 00:08:29.139 256+0 records out 00:08:29.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146193 s, 7.2 MB/s 00:08:29.139 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.139 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:29.139 256+0 records in 00:08:29.139 256+0 records out 00:08:29.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174252 s, 6.0 MB/s 00:08:29.139 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.139 17:14:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.397 256+0 records in 00:08:29.397 256+0 records out 00:08:29.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121869 s, 8.6 MB/s 00:08:29.397 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.397 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:29.397 256+0 records in 00:08:29.397 256+0 records out 00:08:29.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159566 s, 6.6 MB/s 00:08:29.397 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.397 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:29.655 256+0 records in 00:08:29.655 256+0 records out 00:08:29.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136985 s, 7.7 MB/s 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.655 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.912 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.169 17:14:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.169 17:14:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.425 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:30.682 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.940 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:31.204 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
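The write/compare pass that precedes this teardown is the substance of nbd_dd_data_verify: 1 MiB of /dev/urandom is staged in nbdrandtest, pushed through every NBD export with O_DIRECT writes, and then compared back byte-for-byte with cmp. Condensed for illustration (device list and file path are taken from the trace; the loop structure is simplified):

    # Illustrative condensation of the write/verify pass shown above.
    rand=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$rand" bs=4096 count=256              # stage 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD export
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$rand" "$dev"                              # read back and compare
    done
    rm "$rand"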
00:08:31.205 17:14:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.492 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:31.750 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:32.007 malloc_lvol_verify 00:08:32.007 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:32.265 5ce1955d-b92d-4b3a-815d-c109805dc1a3 00:08:32.266 17:14:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:32.524 38fbc949-4177-4694-9da1-1fcdc77ec6fb 00:08:32.524 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.782 /dev/nbd0 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:32.782 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.782 Discarding device blocks: 0/4096 done 00:08:32.782 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.782 00:08:32.782 Allocating group tables: 0/1 done 00:08:32.782 Writing inode tables: 0/1 done 00:08:32.782 Creating journal (1024 blocks): done 00:08:32.782 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.782 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.782 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79240 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 79240 ']' 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 79240 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79240 00:08:33.041 killing process with pid 79240 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 79240' 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 79240 00:08:33.041 17:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 79240 00:08:33.607 17:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:33.607 00:08:33.607 real 0m12.404s 00:08:33.607 user 0m17.890s 00:08:33.607 sys 0m4.280s 00:08:33.607 17:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.607 ************************************ 00:08:33.607 END TEST bdev_nbd 00:08:33.607 ************************************ 00:08:33.607 17:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:33.607 17:14:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:33.607 17:14:44 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:33.607 17:14:44 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:33.607 skipping fio tests on NVMe due to multi-ns failures. 00:08:33.607 17:14:44 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:33.607 17:14:44 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:33.607 17:14:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:33.607 17:14:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:33.607 17:14:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.607 17:14:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.607 ************************************ 00:08:33.607 START TEST bdev_verify 00:08:33.607 ************************************ 00:08:33.607 17:14:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:33.865 [2024-07-15 17:14:44.487232] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:33.865 [2024-07-15 17:14:44.487450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79642 ] 00:08:33.865 [2024-07-15 17:14:44.636135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.865 [2024-07-15 17:14:44.655217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.123 [2024-07-15 17:14:44.790613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.123 [2024-07-15 17:14:44.790653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.689 Running I/O for 5 seconds... 
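The verify stage that has just started leaves NBD behind and drives the six NVMe bdevs directly through the bdevperf example app: a queue depth of 128, 4 KiB requests, the verify workload (I/O with data integrity checking), a five-second run, and two reactors (-m 0x3, matching the two cores reported by the EAL above). The flags and paths below are copied from the trace; the wrapper and error handling are an illustrative sketch only:

    # Flags and paths copied from the trace; the wrapper itself is illustrative.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 \
        || echo "bdevperf verify pass failed" >&2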
00:08:39.985 00:08:39.985 Latency(us) 00:08:39.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0xbd0bd 00:08:39.985 Nvme0n1 : 5.08 1587.71 6.20 0.00 0.00 80445.64 18707.55 81026.33 00:08:39.985 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:39.985 Nvme0n1 : 5.05 1572.65 6.14 0.00 0.00 81047.63 18707.55 79119.83 00:08:39.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0xa0000 00:08:39.985 Nvme1n1 : 5.08 1587.02 6.20 0.00 0.00 80365.16 20137.43 78166.57 00:08:39.985 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0xa0000 length 0xa0000 00:08:39.985 Nvme1n1 : 5.07 1576.67 6.16 0.00 0.00 80609.71 8162.21 78643.20 00:08:39.985 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0x80000 00:08:39.985 Nvme2n1 : 5.08 1586.27 6.20 0.00 0.00 80264.50 18469.24 73876.95 00:08:39.985 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x80000 length 0x80000 00:08:39.985 Nvme2n1 : 5.09 1585.22 6.19 0.00 0.00 80184.69 9889.98 78166.57 00:08:39.985 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0x80000 00:08:39.985 Nvme2n2 : 5.09 1585.50 6.19 0.00 0.00 80158.00 17158.52 75306.82 00:08:39.985 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x80000 length 0x80000 00:08:39.985 Nvme2n2 : 5.09 1584.61 6.19 0.00 0.00 80055.52 10128.29 75306.82 00:08:39.985 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0x80000 00:08:39.985 Nvme2n3 : 5.09 1584.72 6.19 0.00 0.00 80051.98 16324.42 78166.57 00:08:39.985 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x80000 length 0x80000 00:08:39.985 Nvme2n3 : 5.09 1583.79 6.19 0.00 0.00 79933.24 11141.12 73876.95 00:08:39.985 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x0 length 0x20000 00:08:39.985 Nvme3n1 : 5.09 1583.87 6.19 0.00 0.00 79945.18 11379.43 80073.08 00:08:39.985 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:39.985 Verification LBA range: start 0x20000 length 0x20000 00:08:39.985 Nvme3n1 : 5.09 1583.00 6.18 0.00 0.00 79866.86 10366.60 78166.57 00:08:39.985 =================================================================================================================== 00:08:39.985 Total : 19001.03 74.22 0.00 0.00 80242.70 8162.21 81026.33 00:08:40.244 00:08:40.244 real 0m6.664s 00:08:40.244 user 0m12.215s 00:08:40.244 sys 0m0.351s 00:08:40.244 17:14:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.244 17:14:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:40.244 ************************************ 00:08:40.244 END TEST bdev_verify 00:08:40.244 ************************************ 00:08:40.502 17:14:51 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:08:40.502 17:14:51 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:40.502 17:14:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:40.502 17:14:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.502 17:14:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.502 ************************************ 00:08:40.502 START TEST bdev_verify_big_io 00:08:40.502 ************************************ 00:08:40.502 17:14:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:40.502 [2024-07-15 17:14:51.210806] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:40.502 [2024-07-15 17:14:51.210994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79735 ] 00:08:40.759 [2024-07-15 17:14:51.363594] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:40.759 [2024-07-15 17:14:51.383043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.759 [2024-07-15 17:14:51.513342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.759 [2024-07-15 17:14:51.513437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.324 Running I/O for 5 seconds... 
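The big-I/O pass just launched repeats the previous run with a single change: -o 65536, i.e. 64 KiB requests instead of 4 KiB. The MiB/s column in these bdevperf summaries is simply IOPS multiplied by the I/O size; for the 4 KiB verify totals above, 19001.03 IOPS x 4096 bytes comes to about 74.22 MiB/s, which matches the Total row. A throwaway one-liner for re-checking any row (values taken from the table above, nothing else assumed):

    # Recompute MiB/s from the IOPS and I/O size of a summary row (here: the verify Total row).
    awk 'BEGIN { iops = 19001.03; iosz = 4096; printf "%.2f MiB/s\n", iops * iosz / 2^20 }'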
00:08:47.906 00:08:47.906 Latency(us) 00:08:47.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.906 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0xbd0b 00:08:47.906 Nvme0n1 : 5.73 130.57 8.16 0.00 0.00 963030.73 29789.09 1014258.97 00:08:47.906 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:47.906 Nvme0n1 : 5.60 114.31 7.14 0.00 0.00 1068018.50 16801.05 1151527.10 00:08:47.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0xa000 00:08:47.906 Nvme1n1 : 5.73 130.84 8.18 0.00 0.00 938808.52 34555.35 1060015.01 00:08:47.906 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0xa000 length 0xa000 00:08:47.906 Nvme1n1 : 5.73 115.36 7.21 0.00 0.00 1015260.89 105810.85 937998.89 00:08:47.906 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0x8000 00:08:47.906 Nvme2n1 : 5.73 130.78 8.17 0.00 0.00 917130.64 35508.60 1075267.03 00:08:47.906 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x8000 length 0x8000 00:08:47.906 Nvme2n1 : 5.83 127.83 7.99 0.00 0.00 899413.27 26571.87 907494.87 00:08:47.906 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0x8000 00:08:47.906 Nvme2n2 : 5.74 131.08 8.19 0.00 0.00 893116.24 36938.47 1098145.05 00:08:47.906 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x8000 length 0x8000 00:08:47.906 Nvme2n2 : 5.83 128.13 8.01 0.00 0.00 862488.61 27763.43 934185.89 00:08:47.906 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0x8000 00:08:47.906 Nvme2n3 : 5.74 130.51 8.16 0.00 0.00 873441.12 36223.53 1121023.07 00:08:47.906 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x8000 length 0x8000 00:08:47.906 Nvme2n3 : 5.92 147.42 9.21 0.00 0.00 737826.78 18588.39 1082893.03 00:08:47.906 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x0 length 0x2000 00:08:47.906 Nvme3n1 : 5.74 137.19 8.57 0.00 0.00 812642.87 1333.06 1136275.08 00:08:47.906 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:47.906 Verification LBA range: start 0x2000 length 0x2000 00:08:47.906 Nvme3n1 : 5.98 168.43 10.53 0.00 0.00 627111.08 1608.61 1998013.91 00:08:47.906 =================================================================================================================== 00:08:47.906 Total : 1592.45 99.53 0.00 0.00 871126.99 1333.06 1998013.91 00:08:48.471 00:08:48.471 real 0m8.151s 00:08:48.471 user 0m15.089s 00:08:48.471 sys 0m0.419s 00:08:48.471 17:14:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.471 17:14:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:48.471 ************************************ 00:08:48.471 END TEST bdev_verify_big_io 00:08:48.471 ************************************ 00:08:48.471 17:14:59 
blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:48.471 17:14:59 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.471 17:14:59 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:48.471 17:14:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.471 17:14:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.471 ************************************ 00:08:48.471 START TEST bdev_write_zeroes 00:08:48.471 ************************************ 00:08:48.471 17:14:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.728 [2024-07-15 17:14:59.418982] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:48.728 [2024-07-15 17:14:59.419199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79839 ] 00:08:48.728 [2024-07-15 17:14:59.573212] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.986 [2024-07-15 17:14:59.594529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.986 [2024-07-15 17:14:59.727935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.551 Running I/O for 1 seconds... 
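The write_zeroes pass above drives bdevperf against the same bdev.json used throughout this run. A minimal standalone sketch of an equivalent invocation follows (paths assume this CI's /home/vagrant/spdk_repo layout; flag meanings are annotated inline):

    # -q 128: queue depth, -o 4096: I/O size in bytes, -w write_zeroes: workload, -t 1: run time in seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1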
00:08:50.495
00:08:50.495 Latency(us)
00:08:50.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:50.495 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme0n1 : 1.02 7236.59 28.27 0.00 0.00 17619.98 5868.45 61961.31
00:08:50.495 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme1n1 : 1.02 7281.97 28.45 0.00 0.00 17482.19 12630.57 46232.67
00:08:50.495 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme2n1 : 1.02 7270.66 28.40 0.00 0.00 17433.69 11558.17 47185.92
00:08:50.495 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme2n2 : 1.02 7310.71 28.56 0.00 0.00 17320.01 9413.35 47424.23
00:08:50.495 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme2n3 : 1.03 7299.73 28.51 0.00 0.00 17312.22 9651.67 47662.55
00:08:50.495 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:50.495 Nvme3n1 : 1.03 7288.83 28.47 0.00 0.00 17264.75 9294.20 47424.23
00:08:50.495 ===================================================================================================================
00:08:50.495 Total : 43688.47 170.66 0.00 0.00 17404.74 5868.45 61961.31
00:08:51.062
00:08:51.062 real 0m2.353s
00:08:51.062 user 0m1.881s
00:08:51.062 sys 0m0.350s
00:08:51.062 17:15:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:51.062 17:15:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:51.062 ************************************
00:08:51.062 END TEST bdev_write_zeroes
00:08:51.062 ************************************
00:08:51.062 17:15:01 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:51.062 17:15:01 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:51.062 17:15:01 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:51.062 17:15:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:51.062 17:15:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:51.062 ************************************
00:08:51.062 START TEST bdev_json_nonenclosed
00:08:51.062 ************************************
00:08:51.062 17:15:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:51.320 [2024-07-15 17:15:01.830873] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization...
00:08:51.320 [2024-07-15 17:15:01.831108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79881 ]
00:08:51.320 [2024-07-15 17:15:01.985939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:51.320 [2024-07-15 17:15:02.007752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.320 [2024-07-15 17:15:02.145518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.320 [2024-07-15 17:15:02.145700] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:51.320 [2024-07-15 17:15:02.145751] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:51.320 [2024-07-15 17:15:02.145774] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.579 00:08:51.579 real 0m0.594s 00:08:51.579 user 0m0.325s 00:08:51.579 sys 0m0.163s 00:08:51.579 17:15:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:51.579 17:15:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.579 17:15:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:51.579 ************************************ 00:08:51.579 END TEST bdev_json_nonenclosed 00:08:51.579 ************************************ 00:08:51.579 17:15:02 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:51.579 17:15:02 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:08:51.579 17:15:02 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.579 17:15:02 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:51.579 17:15:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.579 17:15:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.579 ************************************ 00:08:51.579 START TEST bdev_json_nonarray 00:08:51.579 ************************************ 00:08:51.579 17:15:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.837 [2024-07-15 17:15:02.482739] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:51.837 [2024-07-15 17:15:02.482992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79912 ] 00:08:51.837 [2024-07-15 17:15:02.639334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.837 [2024-07-15 17:15:02.658936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.096 [2024-07-15 17:15:02.808028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.096 [2024-07-15 17:15:02.808271] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
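The two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf configs that violate the loader's structural requirements reported in the errors: the file must be one enclosing JSON object, and its "subsystems" member must be an array. A minimal sketch of a config satisfying both, reusing one of the attach-controller entries that appears later in this log (values illustrative, not part of these tests):

    cat > /tmp/minimal_bdev_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF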
00:08:52.096 [2024-07-15 17:15:02.808348] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:52.096 [2024-07-15 17:15:02.808436] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.355 00:08:52.355 real 0m0.614s 00:08:52.355 user 0m0.347s 00:08:52.355 sys 0m0.160s 00:08:52.355 17:15:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:52.355 17:15:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.355 ************************************ 00:08:52.355 END TEST bdev_json_nonarray 00:08:52.355 ************************************ 00:08:52.355 17:15:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:52.355 17:15:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:52.355 17:15:03 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:52.355 ************************************ 00:08:52.355 END TEST blockdev_nvme 00:08:52.355 ************************************ 00:08:52.355 00:08:52.355 real 0m36.668s 00:08:52.355 user 0m55.473s 00:08:52.355 sys 0m7.401s 00:08:52.355 17:15:03 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.355 17:15:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.355 17:15:03 -- common/autotest_common.sh@1142 -- # return 0 00:08:52.355 17:15:03 -- spdk/autotest.sh@213 -- # uname -s 00:08:52.355 17:15:03 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:52.355 17:15:03 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:52.355 17:15:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.355 17:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.355 17:15:03 -- common/autotest_common.sh@10 -- # set +x 00:08:52.355 ************************************ 00:08:52.355 START TEST blockdev_nvme_gpt 00:08:52.355 ************************************ 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:52.355 * Looking for test storage... 
00:08:52.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79988 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:52.355 17:15:03 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 79988 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 79988 ']' 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
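The waitforlisten helper above blocks until the freshly started spdk_tgt answers on its RPC socket. A rough sketch of that readiness check, using scripts/rpc.py from the same repo (rpc_get_methods is only a cheap liveness probe here, not necessarily what the helper literally calls):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    # poll the UNIX-domain RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"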
00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.355 17:15:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-07-15 17:15:03.323580] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:08:52.629 [2024-07-15 17:15:03.324285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79988 ] 00:08:52.629 [2024-07-15 17:15:03.478947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:52.890 [2024-07-15 17:15:03.501534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.890 [2024-07-15 17:15:03.609021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.844 17:15:04 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.844 17:15:04 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:08:53.844 17:15:04 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:53.844 17:15:04 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:53.844 17:15:04 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.102 Waiting for block devices as requested 00:08:54.360 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.360 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.360 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.619 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.884 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:59.884 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.884 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:59.885 17:15:10 
blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:59.885 BYT; 00:08:59.885 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:59.885 BYT; 00:08:59.885 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:59.885 17:15:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:59.885 17:15:10 
blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:59.885 17:15:10 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:00.818 The operation has completed successfully. 00:09:00.818 17:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:01.752 The operation has completed successfully. 00:09:01.752 17:15:12 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.882 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.882 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.882 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:03.140 17:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.140 17:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.140 [] 00:09:03.140 17:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.140 17:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:03.141 17:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.141 17:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- 
# rpc_cmd save_subsystem_config -n bdev 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:03.399 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.399 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.659 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.659 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:03.659 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:03.660 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fa3e4f1f-60e5-4c1e-a633-b3cd8ef2c31f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fa3e4f1f-60e5-4c1e-a633-b3cd8ef2c31f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e112753b-1cdc-442b-96ba-1701c79c8c3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e112753b-1cdc-442b-96ba-1701c79c8c3a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "17b0f49b-0a97-4cef-9a30-2048d9238fb2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "17b0f49b-0a97-4cef-9a30-2048d9238fb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f759fd96-8e6e-4e2a-917a-82f3472ca8f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f759fd96-8e6e-4e2a-917a-82f3472ca8f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ebae3667-de3c-47ae-bd49-cb982c1c2217"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ebae3667-de3c-47ae-bd49-cb982c1c2217",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:03.660 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:03.660 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:03.660 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:03.660 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 79988 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 79988 ']' 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 79988 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79988 00:09:03.660 killing process with pid 79988 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79988' 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 79988 00:09:03.660 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 79988 00:09:04.226 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:04.226 17:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:04.226 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:04.226 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.226 17:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.226 ************************************ 00:09:04.226 START TEST bdev_hello_world 00:09:04.226 ************************************ 00:09:04.226 17:15:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:04.226 [2024-07-15 17:15:14.995673] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:09:04.226 [2024-07-15 17:15:14.996606] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80595 ] 00:09:04.484 [2024-07-15 17:15:15.149546] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.484 [2024-07-15 17:15:15.172626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.484 [2024-07-15 17:15:15.279634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.049 [2024-07-15 17:15:15.709500] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:05.049 [2024-07-15 17:15:15.709573] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:05.049 [2024-07-15 17:15:15.709622] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:05.049 [2024-07-15 17:15:15.712392] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:05.049 [2024-07-15 17:15:15.713040] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:05.049 [2024-07-15 17:15:15.713083] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:05.049 [2024-07-15 17:15:15.713305] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:05.049 00:09:05.049 [2024-07-15 17:15:15.713341] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:05.306 00:09:05.306 real 0m1.109s 00:09:05.306 user 0m0.722s 00:09:05.306 sys 0m0.279s 00:09:05.306 ************************************ 00:09:05.306 END TEST bdev_hello_world 00:09:05.306 ************************************ 00:09:05.306 17:15:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.307 17:15:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:05.307 17:15:16 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:05.307 17:15:16 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:05.307 17:15:16 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.307 17:15:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.307 17:15:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:05.307 ************************************ 00:09:05.307 START TEST bdev_bounds 00:09:05.307 ************************************ 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:05.307 Process bdevio pid: 80631 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=80631 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 80631' 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 80631 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 80631 ']' 
00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.307 17:15:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:05.307 [2024-07-15 17:15:16.119025] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:05.307 [2024-07-15 17:15:16.119429] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80631 ] 00:09:05.565 [2024-07-15 17:15:16.262722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:05.565 [2024-07-15 17:15:16.282416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.565 [2024-07-15 17:15:16.390046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.565 [2024-07-15 17:15:16.390160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.565 [2024-07-15 17:15:16.390215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.498 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.498 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:06.498 17:15:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:06.498 I/O targets: 00:09:06.498 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:06.498 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:06.498 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:06.498 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.498 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.498 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.498 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:06.498 00:09:06.498 00:09:06.498 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.498 http://cunit.sourceforge.net/ 00:09:06.498 00:09:06.498 00:09:06.498 Suite: bdevio tests on: Nvme3n1 00:09:06.498 Test: blockdev write read block ...passed 00:09:06.498 Test: blockdev write zeroes read block ...passed 00:09:06.498 Test: blockdev write zeroes read no split ...passed 00:09:06.498 Test: blockdev write zeroes read split ...passed 00:09:06.498 Test: blockdev write zeroes read split partial ...passed 00:09:06.498 Test: blockdev reset ...[2024-07-15 17:15:17.317433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:06.498 passed 00:09:06.498 Test: blockdev write read 8 blocks ...[2024-07-15 17:15:17.319919] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.498 passed 00:09:06.498 Test: blockdev write read size > 128k ...passed 00:09:06.498 Test: blockdev write read invalid size ...passed 00:09:06.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.498 Test: blockdev write read max offset ...passed 00:09:06.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.498 Test: blockdev writev readv 8 blocks ...passed 00:09:06.498 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.498 Test: blockdev writev readv block ...passed 00:09:06.498 Test: blockdev writev readv size > 128k ...passed 00:09:06.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.498 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.326294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d963d000 len:0x1000 00:09:06.499 [2024-07-15 17:15:17.326467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.499 passed 00:09:06.499 Test: blockdev nvme passthru rw ...passed 00:09:06.499 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:15:17.327517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.499 [2024-07-15 17:15:17.327563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.499 passed 00:09:06.499 Test: blockdev nvme admin passthru ...passed 00:09:06.499 Test: blockdev copy ...passed 00:09:06.499 Suite: bdevio tests on: Nvme2n3 00:09:06.499 Test: blockdev write read block ...passed 00:09:06.499 Test: blockdev write zeroes read block ...passed 00:09:06.499 Test: blockdev write zeroes read no split ...passed 00:09:06.499 Test: blockdev write zeroes read split ...passed 00:09:06.756 Test: blockdev write zeroes read split partial ...passed 00:09:06.756 Test: blockdev reset ...[2024-07-15 17:15:17.358138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.756 [2024-07-15 17:15:17.360814] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.756 passed 00:09:06.756 Test: blockdev write read 8 blocks ...passed 00:09:06.756 Test: blockdev write read size > 128k ...passed 00:09:06.756 Test: blockdev write read invalid size ...passed 00:09:06.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.756 Test: blockdev write read max offset ...passed 00:09:06.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.756 Test: blockdev writev readv 8 blocks ...passed 00:09:06.756 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.756 Test: blockdev writev readv block ...passed 00:09:06.756 Test: blockdev writev readv size > 128k ...passed 00:09:06.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.756 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.368002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9639000 len:0x1000 00:09:06.756 [2024-07-15 17:15:17.368069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme passthru rw ...passed 00:09:06.756 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.756 Test: blockdev nvme admin passthru ...[2024-07-15 17:15:17.368925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.756 [2024-07-15 17:15:17.368980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev copy ...passed 00:09:06.756 Suite: bdevio tests on: Nvme2n2 00:09:06.756 Test: blockdev write read block ...passed 00:09:06.756 Test: blockdev write zeroes read block ...passed 00:09:06.756 Test: blockdev write zeroes read no split ...passed 00:09:06.756 Test: blockdev write zeroes read split ...passed 00:09:06.756 Test: blockdev write zeroes read split partial ...passed 00:09:06.756 Test: blockdev reset ...[2024-07-15 17:15:17.394309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.756 [2024-07-15 17:15:17.397011] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.756 passed 00:09:06.756 Test: blockdev write read 8 blocks ...passed 00:09:06.756 Test: blockdev write read size > 128k ...passed 00:09:06.756 Test: blockdev write read invalid size ...passed 00:09:06.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.756 Test: blockdev write read max offset ...passed 00:09:06.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.756 Test: blockdev writev readv 8 blocks ...passed 00:09:06.756 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.756 Test: blockdev writev readv block ...passed 00:09:06.756 Test: blockdev writev readv size > 128k ...passed 00:09:06.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.756 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.404989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9635000 len:0x1000 00:09:06.756 [2024-07-15 17:15:17.405056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme passthru rw ...passed 00:09:06.756 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:15:17.405916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.756 [2024-07-15 17:15:17.405970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme admin passthru ...passed 00:09:06.756 Test: blockdev copy ...passed 00:09:06.756 Suite: bdevio tests on: Nvme2n1 00:09:06.756 Test: blockdev write read block ...passed 00:09:06.756 Test: blockdev write zeroes read block ...passed 00:09:06.756 Test: blockdev write zeroes read no split ...passed 00:09:06.756 Test: blockdev write zeroes read split ...passed 00:09:06.756 Test: blockdev write zeroes read split partial ...passed 00:09:06.756 Test: blockdev reset ...[2024-07-15 17:15:17.432208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.756 [2024-07-15 17:15:17.434942] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.756 passed 00:09:06.756 Test: blockdev write read 8 blocks ...passed 00:09:06.756 Test: blockdev write read size > 128k ...passed 00:09:06.756 Test: blockdev write read invalid size ...passed 00:09:06.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.756 Test: blockdev write read max offset ...passed 00:09:06.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.756 Test: blockdev writev readv 8 blocks ...passed 00:09:06.756 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.756 Test: blockdev writev readv block ...passed 00:09:06.756 Test: blockdev writev readv size > 128k ...passed 00:09:06.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.756 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.443302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:09:06.756 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d962f000 len:0x1000 00:09:06.756 [2024-07-15 17:15:17.443517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.756 Test: blockdev nvme admin passthru ...[2024-07-15 17:15:17.444376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.756 [2024-07-15 17:15:17.444438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev copy ...passed 00:09:06.756 Suite: bdevio tests on: Nvme1n1 00:09:06.756 Test: blockdev write read block ...passed 00:09:06.756 Test: blockdev write zeroes read block ...passed 00:09:06.756 Test: blockdev write zeroes read no split ...passed 00:09:06.756 Test: blockdev write zeroes read split ...passed 00:09:06.756 Test: blockdev write zeroes read split partial ...passed 00:09:06.756 Test: blockdev reset ...[2024-07-15 17:15:17.468395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:06.756 passed 00:09:06.756 Test: blockdev write read 8 blocks ...[2024-07-15 17:15:17.470507] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.756 passed 00:09:06.756 Test: blockdev write read size > 128k ...passed 00:09:06.756 Test: blockdev write read invalid size ...passed 00:09:06.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.756 Test: blockdev write read max offset ...passed 00:09:06.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.756 Test: blockdev writev readv 8 blocks ...passed 00:09:06.756 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.756 Test: blockdev writev readv block ...passed 00:09:06.756 Test: blockdev writev readv size > 128k ...passed 00:09:06.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.756 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.477114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0e000 len:0x1000 00:09:06.756 [2024-07-15 17:15:17.477183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme passthru rw ...passed 00:09:06.756 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:15:17.478036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.756 [2024-07-15 17:15:17.478090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.756 passed 00:09:06.756 Test: blockdev nvme admin passthru ...passed 00:09:06.756 Test: blockdev copy ...passed 00:09:06.756 Suite: bdevio tests on: Nvme0n1p2 00:09:06.756 Test: blockdev write read block ...passed 00:09:06.756 Test: blockdev write zeroes read block ...passed 00:09:06.756 Test: blockdev write zeroes read no split ...passed 00:09:06.756 Test: blockdev write zeroes read split ...passed 00:09:06.756 Test: blockdev write zeroes read split partial ...passed 00:09:06.756 Test: blockdev reset ...[2024-07-15 17:15:17.494369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:06.756 passed 00:09:06.756 Test: blockdev write read 8 blocks ...[2024-07-15 17:15:17.496475] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.756 passed 00:09:06.756 Test: blockdev write read size > 128k ...passed 00:09:06.756 Test: blockdev write read invalid size ...passed 00:09:06.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.756 Test: blockdev write read max offset ...passed 00:09:06.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.756 Test: blockdev writev readv 8 blocks ...passed 00:09:06.756 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.756 Test: blockdev writev readv block ...passed 00:09:06.756 Test: blockdev writev readv size > 128k ...passed 00:09:06.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.757 Test: blockdev comparev and writev ...passed 00:09:06.757 Test: blockdev nvme passthru rw ...passed 00:09:06.757 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.757 Test: blockdev nvme admin passthru ...passed 00:09:06.757 Test: blockdev copy ...[2024-07-15 17:15:17.501960] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:06.757 separate metadata which is not supported yet. 00:09:06.757 passed 00:09:06.757 Suite: bdevio tests on: Nvme0n1p1 00:09:06.757 Test: blockdev write read block ...passed 00:09:06.757 Test: blockdev write zeroes read block ...passed 00:09:06.757 Test: blockdev write zeroes read no split ...passed 00:09:06.757 Test: blockdev write zeroes read split ...passed 00:09:06.757 Test: blockdev write zeroes read split partial ...passed 00:09:06.757 Test: blockdev reset ...[2024-07-15 17:15:17.515535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:06.757 passed 00:09:06.757 Test: blockdev write read 8 blocks ...[2024-07-15 17:15:17.517479] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:06.757 passed 00:09:06.757 Test: blockdev write read size > 128k ...passed 00:09:06.757 Test: blockdev write read invalid size ...passed 00:09:06.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.757 Test: blockdev write read max offset ...passed 00:09:06.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.757 Test: blockdev writev readv 8 blocks ...passed 00:09:06.757 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.757 Test: blockdev writev readv block ...passed 00:09:06.757 Test: blockdev writev readv size > 128k ...passed 00:09:06.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.757 Test: blockdev comparev and writev ...[2024-07-15 17:15:17.522994] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:06.757 separate metadata which is not supported yet. 
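Both Nvme0n1p1 and Nvme0n1p2 skip comparev_and_writev because the namespace they are carved from is formatted with separate (non-interleaved) metadata, which bdevio does not exercise yet. A quick way to confirm the layout is to dump the bdev over RPC; this sketch assumes the installed SPDK reports md_size and md_interleave in bdev_get_bdevs output (field names can vary between versions):

  # hypothetical check; -b restricts the dump to a single bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_get_bdevs -b Nvme0n1p1 \
      | jq -r '.[0] | "md_size=\(.md_size) md_interleave=\(.md_interleave)"'

A non-zero md_size together with md_interleave=false is the combination that triggers the skip above.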
00:09:06.757 passed 00:09:06.757 Test: blockdev nvme passthru rw ...passed 00:09:06.757 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.757 Test: blockdev nvme admin passthru ...passed 00:09:06.757 Test: blockdev copy ...passed 00:09:06.757 00:09:06.757 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.757 suites 7 7 n/a 0 0 00:09:06.757 tests 161 161 161 0 0 00:09:06.757 asserts 1006 1006 1006 0 n/a 00:09:06.757 00:09:06.757 Elapsed time = 0.512 seconds 00:09:06.757 0 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 80631 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 80631 ']' 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 80631 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80631 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80631' 00:09:06.757 killing process with pid 80631 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 80631 00:09:06.757 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 80631 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:07.014 00:09:07.014 real 0m1.761s 00:09:07.014 user 0m4.437s 00:09:07.014 sys 0m0.409s 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:07.014 ************************************ 00:09:07.014 END TEST bdev_bounds 00:09:07.014 ************************************ 00:09:07.014 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:07.014 17:15:17 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.014 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:07.014 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.014 17:15:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.014 ************************************ 00:09:07.014 START TEST bdev_nbd 00:09:07.014 ************************************ 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.014 17:15:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=80680 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 80680 /var/tmp/spdk-nbd.sock 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 80680 ']' 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:07.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.014 17:15:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:07.270 [2024-07-15 17:15:17.930237] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:07.270 [2024-07-15 17:15:17.930673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.270 [2024-07-15 17:15:18.079747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:07.270 [2024-07-15 17:15:18.100928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.557 [2024-07-15 17:15:18.209102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.122 17:15:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.686 1+0 records in 00:09:08.686 1+0 records out 00:09:08.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591683 s, 6.9 MB/s 00:09:08.686 
17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.686 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.943 1+0 records in 00:09:08.943 1+0 records out 00:09:08.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562705 s, 7.3 MB/s 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.943 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.200 1+0 records in 00:09:09.200 1+0 records out 00:09:09.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646951 s, 6.3 MB/s 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.200 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:09.201 17:15:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:09.201 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.201 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.201 17:15:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.460 1+0 records in 00:09:09.460 1+0 records out 00:09:09.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081619 s, 5.0 MB/s 00:09:09.460 17:15:20 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.460 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:09.716 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:09.716 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:09.974 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.974 1+0 records in 00:09:09.975 1+0 records out 00:09:09.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772107 s, 5.3 MB/s 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.975 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd5 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.232 1+0 records in 00:09:10.232 1+0 records out 00:09:10.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075807 s, 5.4 MB/s 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.232 17:15:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.490 1+0 records in 00:09:10.490 1+0 records out 00:09:10.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689739 s, 5.9 MB/s 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.490 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.764 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd0", 00:09:10.764 "bdev_name": "Nvme0n1p1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd1", 00:09:10.764 "bdev_name": "Nvme0n1p2" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd2", 00:09:10.764 "bdev_name": "Nvme1n1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd3", 00:09:10.764 "bdev_name": "Nvme2n1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd4", 00:09:10.764 "bdev_name": "Nvme2n2" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd5", 00:09:10.764 "bdev_name": "Nvme2n3" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd6", 00:09:10.764 "bdev_name": "Nvme3n1" 00:09:10.764 } 00:09:10.764 ]' 00:09:10.764 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:10.764 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd0", 00:09:10.764 "bdev_name": "Nvme0n1p1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd1", 00:09:10.764 "bdev_name": "Nvme0n1p2" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd2", 00:09:10.764 "bdev_name": "Nvme1n1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd3", 00:09:10.764 "bdev_name": "Nvme2n1" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd4", 00:09:10.764 "bdev_name": "Nvme2n2" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd5", 00:09:10.764 "bdev_name": "Nvme2n3" 00:09:10.764 }, 00:09:10.764 { 00:09:10.764 "nbd_device": "/dev/nbd6", 00:09:10.764 "bdev_name": "Nvme3n1" 00:09:10.764 } 00:09:10.764 ]' 00:09:10.764 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:11.022 17:15:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.022 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.280 17:15:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.538 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.796 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.363 17:15:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.621 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.879 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:13.136 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd6 /proc/partitions 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.137 17:15:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:13.395 17:15:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.395 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:13.962 /dev/nbd0 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.962 1+0 records in 00:09:13.962 1+0 records out 00:09:13.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101414 s, 4.0 MB/s 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.962 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:14.230 /dev/nbd1 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 
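Every iteration of this loop repeats the same four steps: export one bdev over NBD at a fixed device node, wait for the kernel to publish it in /proc/partitions, read a single 4 KiB block with O_DIRECT, and check that exactly 4096 bytes came back. Condensed into a standalone sketch (socket path, bdev and device names are taken from the trace; the helper name and the fixed retry loop are illustrative simplifications, not part of the suite):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  export_and_check() {
      local bdev=$1 dev=$2 name=${2#/dev/}
      "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
      # wait (up to ~2 s) for the kernel to list the device, as waitfornbd does
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # one O_DIRECT read, then verify the copied size
      dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] || return 1
      rm -f /tmp/nbdtest
  }

  export_and_check Nvme0n1p1 /dev/nbd0

Tear-down mirrors this with nbd_stop_disk over the same socket followed by waiting for the name to drop out of /proc/partitions, as the nbd_stop_disk / waitfornbd_exit calls earlier in the trace show.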
00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.230 1+0 records in 00:09:14.230 1+0 records out 00:09:14.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736457 s, 5.6 MB/s 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.230 17:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:14.488 /dev/nbd10 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.488 1+0 records in 00:09:14.488 1+0 records out 00:09:14.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568433 s, 7.2 MB/s 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.488 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:14.746 /dev/nbd11 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.746 1+0 records in 00:09:14.746 1+0 records out 00:09:14.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733315 s, 5.6 MB/s 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.746 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:15.004 /dev/nbd12 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:15.004 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 
)) 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.005 1+0 records in 00:09:15.005 1+0 records out 00:09:15.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892405 s, 4.6 MB/s 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.005 17:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:15.263 /dev/nbd13 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.263 1+0 records in 00:09:15.263 1+0 records out 00:09:15.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668397 s, 6.1 MB/s 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.263 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 
/dev/nbd14 00:09:15.522 /dev/nbd14 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.522 1+0 records in 00:09:15.522 1+0 records out 00:09:15.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875199 s, 4.7 MB/s 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.522 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd0", 00:09:16.089 "bdev_name": "Nvme0n1p1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd1", 00:09:16.089 "bdev_name": "Nvme0n1p2" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd10", 00:09:16.089 "bdev_name": "Nvme1n1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd11", 00:09:16.089 "bdev_name": "Nvme2n1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd12", 00:09:16.089 "bdev_name": "Nvme2n2" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd13", 00:09:16.089 "bdev_name": "Nvme2n3" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd14", 00:09:16.089 "bdev_name": "Nvme3n1" 00:09:16.089 } 00:09:16.089 ]' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd0", 00:09:16.089 "bdev_name": "Nvme0n1p1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd1", 00:09:16.089 "bdev_name": "Nvme0n1p2" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd10", 00:09:16.089 "bdev_name": "Nvme1n1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd11", 00:09:16.089 "bdev_name": "Nvme2n1" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd12", 00:09:16.089 "bdev_name": "Nvme2n2" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd13", 00:09:16.089 "bdev_name": "Nvme2n3" 00:09:16.089 }, 00:09:16.089 { 00:09:16.089 "nbd_device": "/dev/nbd14", 00:09:16.089 "bdev_name": "Nvme3n1" 00:09:16.089 } 00:09:16.089 ]' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:16.089 /dev/nbd1 00:09:16.089 /dev/nbd10 00:09:16.089 /dev/nbd11 00:09:16.089 /dev/nbd12 00:09:16.089 /dev/nbd13 00:09:16.089 /dev/nbd14' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:16.089 /dev/nbd1 00:09:16.089 /dev/nbd10 00:09:16.089 /dev/nbd11 00:09:16.089 /dev/nbd12 00:09:16.089 /dev/nbd13 00:09:16.089 /dev/nbd14' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:16.089 256+0 records in 00:09:16.089 256+0 records out 00:09:16.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00833579 s, 126 MB/s 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.089 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:16.348 256+0 records in 00:09:16.348 256+0 records out 00:09:16.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151926 s, 6.9 MB/s 00:09:16.348 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.348 17:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 
oflag=direct 00:09:16.348 256+0 records in 00:09:16.348 256+0 records out 00:09:16.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163682 s, 6.4 MB/s 00:09:16.348 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.348 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:16.606 256+0 records in 00:09:16.606 256+0 records out 00:09:16.606 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158511 s, 6.6 MB/s 00:09:16.606 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.606 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:16.863 256+0 records in 00:09:16.863 256+0 records out 00:09:16.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170534 s, 6.1 MB/s 00:09:16.863 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.863 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:16.863 256+0 records in 00:09:16.863 256+0 records out 00:09:16.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15423 s, 6.8 MB/s 00:09:16.863 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.863 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:17.119 256+0 records in 00:09:17.119 256+0 records out 00:09:17.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158786 s, 6.6 MB/s 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:17.119 256+0 records in 00:09:17.119 256+0 records out 00:09:17.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157457 s, 6.7 MB/s 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:17.119 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.120 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:17.376 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.376 17:15:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:17.376 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.376 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:17.376 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.376 17:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.377 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.634 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 
00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.892 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.149 17:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.407 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.665 
17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.665 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.923 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.181 17:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:19.438 17:15:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:19.438 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:19.439 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:19.439 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:19.696 malloc_lvol_verify 00:09:19.696 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:19.954 385d99dc-9211-4c7f-b7cc-74d607211943 00:09:19.954 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:20.211 66d96435-4d55-43c6-a17c-aec7ff186bf2 00:09:20.211 17:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:20.469 /dev/nbd0 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:20.469 mke2fs 1.46.5 (30-Dec-2021) 00:09:20.469 Discarding device blocks: 0/4096 done 00:09:20.469 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:20.469 00:09:20.469 Allocating group tables: 0/1 done 00:09:20.469 Writing inode tables: 0/1 done 00:09:20.469 Creating journal (1024 blocks): done 00:09:20.469 Writing superblocks and filesystem accounting information: 0/1 done 00:09:20.469 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.469 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.761 
17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 80680 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 80680 ']' 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 80680 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80680 00:09:20.761 killing process with pid 80680 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80680' 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 80680 00:09:20.761 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 80680 00:09:21.020 17:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:21.020 00:09:21.020 real 0m13.967s 00:09:21.020 user 0m20.419s 00:09:21.020 sys 0m4.884s 00:09:21.020 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.020 17:15:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:21.020 ************************************ 00:09:21.020 END TEST bdev_nbd 00:09:21.020 ************************************ 00:09:21.020 17:15:31 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:21.020 skipping fio tests on NVMe due to multi-ns failures. 00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
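The bdev_nbd flow traced above boils down to one repeated pattern per device: nbd_start_disk exports a bdev over NBD, waitfornbd polls /proc/partitions and does a single 4 KiB direct read to confirm the node answers, 1 MiB of random data is written with dd and read back with cmp, and nbd_stop_disk plus waitfornbd_exit tear the mapping down again. A simplified standalone sketch of that pattern, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock (this is not the actual nbd_common.sh helper code; the retry handling is reduced to a plain polling loop):

#!/usr/bin/env bash
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
BDEV=Nvme1n1          # bdev name, as used in the trace above
NBD_DEV=/dev/nbd10    # NBD node to bind it to
TMP=$(mktemp)

# Export the bdev over NBD.
"$RPC" -s "$SOCK" nbd_start_disk "$BDEV" "$NBD_DEV"

# Wait for the kernel to register the device, then prove it with a direct 4 KiB read.
for ((i = 1; i <= 20; i++)); do
    grep -q -w "$(basename "$NBD_DEV")" /proc/partitions && break
    sleep 0.1
done
dd if="$NBD_DEV" of="$TMP" bs=4096 count=1 iflag=direct

# Write 1 MiB of random data and verify it reads back byte-for-byte.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
dd if="$TMP" of="$NBD_DEV" bs=4096 count=256 oflag=direct
cmp -b -n 1M "$TMP" "$NBD_DEV"

# Tear the mapping down and wait for the node to disappear again.
"$RPC" -s "$SOCK" nbd_stop_disk "$NBD_DEV"
for ((i = 1; i <= 20; i++)); do
    grep -q -w "$(basename "$NBD_DEV")" /proc/partitions || break
    sleep 0.1
done
rm -f "$TMP"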
00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:21.020 17:15:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.020 17:15:31 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:21.020 17:15:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.020 17:15:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:21.020 ************************************ 00:09:21.020 START TEST bdev_verify 00:09:21.020 ************************************ 00:09:21.020 17:15:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.276 [2024-07-15 17:15:31.958579] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:21.276 [2024-07-15 17:15:31.958780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81122 ] 00:09:21.276 [2024-07-15 17:15:32.110731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:21.276 [2024-07-15 17:15:32.131599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.534 [2024-07-15 17:15:32.225316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.534 [2024-07-15 17:15:32.225384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.099 Running I/O for 5 seconds... 
00:09:27.360 00:09:27.360 Latency(us) 00:09:27.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.360 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x5e800 00:09:27.360 Nvme0n1p1 : 5.07 1312.31 5.13 0.00 0.00 97286.89 22878.02 90082.21 00:09:27.360 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x5e800 length 0x5e800 00:09:27.360 Nvme0n1p1 : 5.08 1334.69 5.21 0.00 0.00 95687.31 21567.30 92465.34 00:09:27.360 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x5e7ff 00:09:27.360 Nvme0n1p2 : 5.07 1311.58 5.12 0.00 0.00 97134.38 24903.68 86745.83 00:09:27.360 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:27.360 Nvme0n1p2 : 5.09 1333.44 5.21 0.00 0.00 95595.48 22997.18 90082.21 00:09:27.360 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0xa0000 00:09:27.360 Nvme1n1 : 5.08 1310.94 5.12 0.00 0.00 97001.68 28359.21 84362.71 00:09:27.360 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0xa0000 length 0xa0000 00:09:27.360 Nvme1n1 : 5.09 1331.59 5.20 0.00 0.00 95514.26 28240.06 86269.21 00:09:27.360 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x80000 00:09:27.360 Nvme2n1 : 5.08 1310.26 5.12 0.00 0.00 96873.64 28240.06 84839.33 00:09:27.360 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x80000 length 0x80000 00:09:27.360 Nvme2n1 : 5.10 1330.59 5.20 0.00 0.00 95401.74 29789.09 85792.58 00:09:27.360 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x80000 00:09:27.360 Nvme2n2 : 5.08 1309.63 5.12 0.00 0.00 96732.22 26929.34 85792.58 00:09:27.360 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x80000 length 0x80000 00:09:27.360 Nvme2n2 : 5.10 1329.62 5.19 0.00 0.00 95284.11 25856.93 87222.46 00:09:27.360 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x80000 00:09:27.360 Nvme2n3 : 5.09 1308.50 5.11 0.00 0.00 96623.11 20494.89 86745.83 00:09:27.360 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x80000 length 0x80000 00:09:27.360 Nvme2n3 : 5.11 1328.68 5.19 0.00 0.00 95162.66 20137.43 90558.84 00:09:27.360 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x0 length 0x20000 00:09:27.360 Nvme3n1 : 5.09 1307.84 5.11 0.00 0.00 96508.35 17158.52 87699.08 00:09:27.360 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.360 Verification LBA range: start 0x20000 length 0x20000 00:09:27.361 Nvme3n1 : 5.11 1328.02 5.19 0.00 0.00 95043.53 13941.29 92941.96 00:09:27.361 =================================================================================================================== 00:09:27.361 Total : 18487.69 72.22 0.00 0.00 96124.97 13941.29 92941.96 
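The verification results above were produced by the bdevperf command shown at the start of this test; the same invocation is repeated here with per-flag comments (the descriptions paraphrase bdevperf's usage text as understood here, they are not part of the original trace):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
ARGS=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev configuration to load
    -q 128        # queue depth per job
    -o 4096       # I/O size in bytes (4 KiB)
    -w verify     # workload: write, read back, and compare
    -t 5          # run time in seconds
    -C            # give every core in the mask a job on every bdev
    -m 0x3        # core mask: cores 0 and 1, matching the two reactors above
)
"$BDEVPERF" "${ARGS[@]}"

The -C / -m 0x3 pairing is why every bdev appears twice in the table, once for core mask 0x1 and once for 0x2.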
00:09:27.618 00:09:27.618 real 0m6.581s 00:09:27.618 user 0m12.121s 00:09:27.618 sys 0m0.306s 00:09:27.618 17:15:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.618 17:15:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:27.618 ************************************ 00:09:27.618 END TEST bdev_verify 00:09:27.618 ************************************ 00:09:27.876 17:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:27.876 17:15:38 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.876 17:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:27.876 17:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.876 17:15:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.876 ************************************ 00:09:27.876 START TEST bdev_verify_big_io 00:09:27.876 ************************************ 00:09:27.876 17:15:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.876 [2024-07-15 17:15:38.601254] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:27.876 [2024-07-15 17:15:38.601461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81209 ] 00:09:28.133 [2024-07-15 17:15:38.753917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:28.133 [2024-07-15 17:15:38.776771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.133 [2024-07-15 17:15:38.913587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.133 [2024-07-15 17:15:38.913608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.699 Running I/O for 5 seconds... 
00:09:35.258 00:09:35.258 Latency(us) 00:09:35.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.258 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x5e80 00:09:35.258 Nvme0n1p1 : 5.76 116.27 7.27 0.00 0.00 1063814.14 28955.00 1105771.05 00:09:35.258 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x5e80 length 0x5e80 00:09:35.258 Nvme0n1p1 : 5.78 111.45 6.97 0.00 0.00 1096630.15 23592.96 1143901.09 00:09:35.258 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x5e7f 00:09:35.258 Nvme0n1p2 : 5.76 116.58 7.29 0.00 0.00 1038060.10 80549.70 1029510.98 00:09:35.258 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:35.258 Nvme0n1p2 : 5.78 116.21 7.26 0.00 0.00 1033638.90 108670.60 972315.93 00:09:35.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0xa000 00:09:35.258 Nvme1n1 : 5.77 116.90 7.31 0.00 0.00 1014520.26 91512.09 1121023.07 00:09:35.258 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0xa000 length 0xa000 00:09:35.258 Nvme1n1 : 5.79 114.26 7.14 0.00 0.00 1012797.72 111053.73 896055.85 00:09:35.258 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x8000 00:09:35.258 Nvme2n1 : 5.77 121.87 7.62 0.00 0.00 964477.37 40036.54 1128649.08 00:09:35.258 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x8000 length 0x8000 00:09:35.258 Nvme2n1 : 5.86 118.33 7.40 0.00 0.00 954591.38 62914.56 991380.95 00:09:35.258 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x8000 00:09:35.258 Nvme2n2 : 5.77 121.98 7.62 0.00 0.00 941590.00 40989.79 1067641.02 00:09:35.258 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x8000 length 0x8000 00:09:35.258 Nvme2n2 : 5.93 123.00 7.69 0.00 0.00 897970.37 14656.23 1906501.82 00:09:35.258 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x8000 00:09:35.258 Nvme2n3 : 5.79 129.29 8.08 0.00 0.00 874869.26 3306.59 1075267.03 00:09:35.258 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x8000 length 0x8000 00:09:35.258 Nvme2n3 : 5.96 127.11 7.94 0.00 0.00 850513.33 16324.42 1738729.66 00:09:35.258 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x0 length 0x2000 00:09:35.258 Nvme3n1 : 5.79 132.59 8.29 0.00 0.00 834644.95 9472.93 1250665.19 00:09:35.258 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:35.258 Verification LBA range: start 0x2000 length 0x2000 00:09:35.258 Nvme3n1 : 5.99 151.66 9.48 0.00 0.00 696189.54 1251.14 1776859.69 00:09:35.258 =================================================================================================================== 00:09:35.258 Total : 1717.48 107.34 0.00 0.00 938980.06 
1251.14 1906501.82 00:09:35.516 00:09:35.516 real 0m7.843s 00:09:35.516 user 0m14.463s 00:09:35.516 sys 0m0.400s 00:09:35.516 17:15:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.516 17:15:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:35.516 ************************************ 00:09:35.516 END TEST bdev_verify_big_io 00:09:35.516 ************************************ 00:09:35.820 17:15:46 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:35.820 17:15:46 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.820 17:15:46 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:35.820 17:15:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.820 17:15:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:35.820 ************************************ 00:09:35.820 START TEST bdev_write_zeroes 00:09:35.820 ************************************ 00:09:35.820 17:15:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.820 [2024-07-15 17:15:46.494890] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:35.820 [2024-07-15 17:15:46.495079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81318 ] 00:09:35.820 [2024-07-15 17:15:46.643473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.081 [2024-07-15 17:15:46.667891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.081 [2024-07-15 17:15:46.806132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.646 Running I/O for 1 seconds... 
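In these tables the MiB/s column is simply IOPS multiplied by the I/O size of the run: 4096 bytes for the verify run and for the write_zeroes run whose results follow, 65536 bytes for the big-I/O run above. A quick spot check against the first big-I/O row (Nvme0n1p1, core mask 0x1):

# 116.27 IOPS at 64 KiB per I/O
awk 'BEGIN { printf "%.2f MiB/s\n", 116.27 * 65536 / (1024 * 1024) }'   # prints 7.27 MiB/s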
00:09:37.580 00:09:37.580 Latency(us) 00:09:37.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.580 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme0n1p1 : 1.02 6642.68 25.95 0.00 0.00 19166.61 14894.55 32172.22 00:09:37.580 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme0n1p2 : 1.02 6630.69 25.90 0.00 0.00 19158.57 15192.44 31457.28 00:09:37.580 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme1n1 : 1.03 6665.89 26.04 0.00 0.00 19015.49 11677.32 27644.28 00:09:37.580 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme2n1 : 1.03 6655.65 26.00 0.00 0.00 18984.50 12332.68 27763.43 00:09:37.580 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme2n2 : 1.03 6645.40 25.96 0.00 0.00 18948.06 11617.75 26810.18 00:09:37.580 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme2n3 : 1.03 6635.23 25.92 0.00 0.00 18920.62 10902.81 26571.87 00:09:37.580 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:37.580 Nvme3n1 : 1.03 6625.14 25.88 0.00 0.00 18900.86 10664.49 26452.71 00:09:37.580 =================================================================================================================== 00:09:37.580 Total : 46500.68 181.64 0.00 0.00 19013.13 10664.49 32172.22 00:09:38.146 00:09:38.146 real 0m2.363s 00:09:38.146 user 0m1.923s 00:09:38.146 sys 0m0.319s 00:09:38.146 17:15:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.146 17:15:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:38.146 ************************************ 00:09:38.146 END TEST bdev_write_zeroes 00:09:38.146 ************************************ 00:09:38.146 17:15:48 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:38.146 17:15:48 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.146 17:15:48 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:38.146 17:15:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.146 17:15:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.146 ************************************ 00:09:38.146 START TEST bdev_json_nonenclosed 00:09:38.146 ************************************ 00:09:38.146 17:15:48 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.146 [2024-07-15 17:15:48.919402] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:09:38.146 [2024-07-15 17:15:48.919607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81360 ] 00:09:38.404 [2024-07-15 17:15:49.075651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:38.404 [2024-07-15 17:15:49.098133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.404 [2024-07-15 17:15:49.230693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.404 [2024-07-15 17:15:49.230858] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:38.404 [2024-07-15 17:15:49.230900] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:38.404 [2024-07-15 17:15:49.230933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.664 00:09:38.664 real 0m0.576s 00:09:38.664 user 0m0.321s 00:09:38.664 sys 0m0.150s 00:09:38.664 17:15:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:38.664 17:15:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.664 ************************************ 00:09:38.664 17:15:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:38.664 END TEST bdev_json_nonenclosed 00:09:38.664 ************************************ 00:09:38.664 17:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:38.664 17:15:49 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:09:38.664 17:15:49 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.664 17:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:38.664 17:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.664 17:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.664 ************************************ 00:09:38.664 START TEST bdev_json_nonarray 00:09:38.664 ************************************ 00:09:38.664 17:15:49 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.923 [2024-07-15 17:15:49.543741] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:38.923 [2024-07-15 17:15:49.543943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81391 ] 00:09:38.923 [2024-07-15 17:15:49.697187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
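Both JSON negative tests (bdev_json_nonenclosed above, and bdev_json_nonarray, whose run continues below) have the same shape: hand bdevperf a config file that is deliberately malformed, either not enclosed in {} or with 'subsystems' not an array, and require a non-zero exit, which the trace records as es=234. A minimal sketch of that kind of check, using a hypothetical bad-config path and contents rather than the repository's nonenclosed.json/nonarray.json:

# Hypothetical invalid config: the outer {} enclosure is missing on purpose.
BAD_JSON=$(mktemp)
cat > "$BAD_JSON" <<'EOF'
"subsystems": []
EOF

if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json "$BAD_JSON" \
       -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "ERROR: bdevperf accepted an invalid configuration" >&2
    exit 1
fi
echo "bdevperf rejected the invalid configuration as expected"
rm -f "$BAD_JSON"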
00:09:38.923 [2024-07-15 17:15:49.728094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.181 [2024-07-15 17:15:49.877013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.181 [2024-07-15 17:15:49.877207] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:39.181 [2024-07-15 17:15:49.877248] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:39.181 [2024-07-15 17:15:49.877292] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.439 00:09:39.439 real 0m0.613s 00:09:39.439 user 0m0.336s 00:09:39.439 sys 0m0.170s 00:09:39.439 17:15:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:39.439 17:15:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.439 17:15:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:39.439 ************************************ 00:09:39.439 END TEST bdev_json_nonarray 00:09:39.439 ************************************ 00:09:39.439 17:15:50 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:39.439 17:15:50 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:09:39.439 17:15:50 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:39.439 17:15:50 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:39.439 17:15:50 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:39.439 17:15:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:39.439 17:15:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.440 17:15:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:39.440 ************************************ 00:09:39.440 START TEST bdev_gpt_uuid 00:09:39.440 ************************************ 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81417 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 81417 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 81417 ']' 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.440 17:15:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:39.440 [2024-07-15 17:15:50.244726] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:39.440 [2024-07-15 17:15:50.244954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81417 ] 00:09:39.700 [2024-07-15 17:15:50.401317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:39.700 [2024-07-15 17:15:50.425071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.700 [2024-07-15 17:15:50.552998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.634 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.634 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:09:40.634 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:40.634 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.634 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 Some configs were skipped because the RPC state that can call them passed over. 00:09:40.892 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.892 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:40.893 { 00:09:40.893 "name": "Nvme0n1p1", 00:09:40.893 "aliases": [ 00:09:40.893 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:40.893 ], 00:09:40.893 "product_name": "GPT Disk", 00:09:40.893 "block_size": 4096, 00:09:40.893 "num_blocks": 774144, 00:09:40.893 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:40.893 "md_size": 64, 00:09:40.893 "md_interleave": false, 00:09:40.893 "dif_type": 0, 00:09:40.893 "assigned_rate_limits": { 00:09:40.893 "rw_ios_per_sec": 0, 00:09:40.893 "rw_mbytes_per_sec": 0, 00:09:40.893 "r_mbytes_per_sec": 0, 00:09:40.893 "w_mbytes_per_sec": 0 00:09:40.893 }, 00:09:40.893 "claimed": false, 00:09:40.893 "zoned": false, 00:09:40.893 "supported_io_types": { 00:09:40.893 "read": true, 00:09:40.893 "write": true, 00:09:40.893 "unmap": true, 
00:09:40.893 "flush": true, 00:09:40.893 "reset": true, 00:09:40.893 "nvme_admin": false, 00:09:40.893 "nvme_io": false, 00:09:40.893 "nvme_io_md": false, 00:09:40.893 "write_zeroes": true, 00:09:40.893 "zcopy": false, 00:09:40.893 "get_zone_info": false, 00:09:40.893 "zone_management": false, 00:09:40.893 "zone_append": false, 00:09:40.893 "compare": true, 00:09:40.893 "compare_and_write": false, 00:09:40.893 "abort": true, 00:09:40.893 "seek_hole": false, 00:09:40.893 "seek_data": false, 00:09:40.893 "copy": true, 00:09:40.893 "nvme_iov_md": false 00:09:40.893 }, 00:09:40.893 "driver_specific": { 00:09:40.893 "gpt": { 00:09:40.893 "base_bdev": "Nvme0n1", 00:09:40.893 "offset_blocks": 256, 00:09:40.893 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:40.893 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:40.893 "partition_name": "SPDK_TEST_first" 00:09:40.893 } 00:09:40.893 } 00:09:40.893 } 00:09:40.893 ]' 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:40.893 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:41.152 { 00:09:41.152 "name": "Nvme0n1p2", 00:09:41.152 "aliases": [ 00:09:41.152 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:41.152 ], 00:09:41.152 "product_name": "GPT Disk", 00:09:41.152 "block_size": 4096, 00:09:41.152 "num_blocks": 774143, 00:09:41.152 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:41.152 "md_size": 64, 00:09:41.152 "md_interleave": false, 00:09:41.152 "dif_type": 0, 00:09:41.152 "assigned_rate_limits": { 00:09:41.152 "rw_ios_per_sec": 0, 00:09:41.152 "rw_mbytes_per_sec": 0, 00:09:41.152 "r_mbytes_per_sec": 0, 00:09:41.152 "w_mbytes_per_sec": 0 00:09:41.152 }, 00:09:41.152 "claimed": false, 00:09:41.152 "zoned": false, 00:09:41.152 "supported_io_types": { 00:09:41.152 "read": true, 00:09:41.152 "write": true, 00:09:41.152 "unmap": true, 00:09:41.152 "flush": true, 00:09:41.152 "reset": true, 00:09:41.152 "nvme_admin": false, 00:09:41.152 "nvme_io": false, 00:09:41.152 "nvme_io_md": false, 00:09:41.152 "write_zeroes": true, 00:09:41.152 "zcopy": false, 00:09:41.152 "get_zone_info": false, 00:09:41.152 "zone_management": false, 00:09:41.152 "zone_append": false, 00:09:41.152 "compare": true, 00:09:41.152 "compare_and_write": false, 00:09:41.152 
"abort": true, 00:09:41.152 "seek_hole": false, 00:09:41.152 "seek_data": false, 00:09:41.152 "copy": true, 00:09:41.152 "nvme_iov_md": false 00:09:41.152 }, 00:09:41.152 "driver_specific": { 00:09:41.152 "gpt": { 00:09:41.152 "base_bdev": "Nvme0n1", 00:09:41.152 "offset_blocks": 774400, 00:09:41.152 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:41.152 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:41.152 "partition_name": "SPDK_TEST_second" 00:09:41.152 } 00:09:41.152 } 00:09:41.152 } 00:09:41.152 ]' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 81417 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 81417 ']' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 81417 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81417 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.152 killing process with pid 81417 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81417' 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 81417 00:09:41.152 17:15:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 81417 00:09:41.717 ************************************ 00:09:41.717 END TEST bdev_gpt_uuid 00:09:41.717 ************************************ 00:09:41.717 00:09:41.717 real 0m2.434s 00:09:41.717 user 0m2.610s 00:09:41.717 sys 0m0.627s 00:09:41.717 17:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.717 17:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:41.975 17:15:52 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:41.975 17:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.491 Waiting for block devices as requested 00:09:42.491 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.491 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.748 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.748 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.044 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.044 17:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:48.044 17:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:09:48.044 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:48.044 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:48.044 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:48.044 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:48.044 17:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:48.044 00:09:48.044 real 0m55.720s 00:09:48.044 user 1m10.596s 00:09:48.044 sys 0m10.799s 00:09:48.044 17:15:58 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.044 ************************************ 00:09:48.044 17:15:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.044 END TEST blockdev_nvme_gpt 00:09:48.044 ************************************ 00:09:48.044 17:15:58 -- common/autotest_common.sh@1142 -- # return 0 00:09:48.044 17:15:58 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:48.044 17:15:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.044 17:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.044 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:09:48.044 ************************************ 00:09:48.044 START TEST nvme 00:09:48.044 ************************************ 00:09:48.044 17:15:58 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:48.348 * Looking for test storage... 
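The bdev_gpt_uuid checks that conclude above reduce to three jq assertions against the JSON that bdev_get_bdevs returns for each GPT partition. A minimal sketch of the same check done by hand is given below; it assumes a running SPDK target with the same bdev.json loaded, and it calls scripts/rpc.py directly in place of the rpc_cmd wrapper used in the trace, with the UUID reported above for Nvme0n1p1 standing in for a real one.
# Hypothetical manual re-run of the GPT UUID verification traced above.
uuid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev=$(./scripts/rpc.py bdev_get_bdevs -b "$uuid")
# Exactly one bdev should match a lookup by UUID.
[[ $(jq -r length <<<"$bdev") == 1 ]]
# Both the alias and the GPT unique_partition_guid must echo the same UUID.
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]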
00:09:48.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.348 17:15:58 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.208 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.466 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.467 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.467 17:16:00 nvme -- nvme/nvme.sh@79 -- # uname 00:09:49.467 17:16:00 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:49.467 17:16:00 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:49.467 17:16:00 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1069 -- # stubpid=82036 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:09:49.467 Waiting for stub to ready for secondary processes... 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/82036 ]] 00:09:49.467 17:16:00 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:49.467 [2024-07-15 17:16:00.257096] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:09:49.467 [2024-07-15 17:16:00.257311] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:50.400 17:16:01 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:50.400 17:16:01 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/82036 ]] 00:09:50.400 17:16:01 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:50.965 [2024-07-15 17:16:01.534611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:50.965 [2024-07-15 17:16:01.556955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.965 [2024-07-15 17:16:01.631570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.965 [2024-07-15 17:16:01.631598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.965 [2024-07-15 17:16:01.631663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.965 [2024-07-15 17:16:01.645857] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:50.965 [2024-07-15 17:16:01.645934] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.965 [2024-07-15 17:16:01.655231] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:50.965 [2024-07-15 17:16:01.655457] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:50.965 [2024-07-15 17:16:01.657101] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.965 [2024-07-15 17:16:01.657804] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:50.965 [2024-07-15 17:16:01.658002] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:50.965 [2024-07-15 17:16:01.658960] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.965 [2024-07-15 17:16:01.659545] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:50.965 [2024-07-15 17:16:01.659982] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:50.965 [2024-07-15 17:16:01.661270] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.965 [2024-07-15 17:16:01.661639] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:50.965 [2024-07-15 17:16:01.661785] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:50.965 [2024-07-15 17:16:01.661921] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:50.965 [2024-07-15 17:16:01.662067] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:51.529 17:16:02 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:51.529 done. 00:09:51.529 17:16:02 nvme -- common/autotest_common.sh@1076 -- # echo done. 
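Between the stub launch and the "done." above, the harness is only polling for the stub's readiness marker. A rough sketch of that wait loop is shown below; it is paraphrased from the xtrace line numbers rather than copied from autotest_common.sh, and it reuses the stub arguments that appear in the trace (-s 4096 MB of memory, -i 0 shared-memory id, -m 0xE core mask).
# Approximate shape of the readiness wait stepped through above (a sketch,
# not the literal autotest_common.sh code).
"$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
stubpid=$!
echo "Waiting for stub to ready for secondary processes..."
while [ ! -e /var/run/spdk_stub0 ]; do
    # Give up if the stub died before creating its socket.
    [[ -e /proc/$stubpid ]] || exit 1
    sleep 1s
done
echo done.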
00:09:51.529 17:16:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.529 17:16:02 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:51.529 17:16:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.529 17:16:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.529 ************************************ 00:09:51.529 START TEST nvme_reset 00:09:51.529 ************************************ 00:09:51.529 17:16:02 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.786 Initializing NVMe Controllers 00:09:51.786 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:51.786 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:51.786 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:51.786 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:51.787 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:51.787 00:09:51.787 real 0m0.305s 00:09:51.787 user 0m0.093s 00:09:51.787 sys 0m0.160s 00:09:51.787 17:16:02 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.787 17:16:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 ************************************ 00:09:51.787 END TEST nvme_reset 00:09:51.787 ************************************ 00:09:51.787 17:16:02 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:51.787 17:16:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:51.787 17:16:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:51.787 17:16:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.787 17:16:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 ************************************ 00:09:51.787 START TEST nvme_identify 00:09:51.787 ************************************ 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:09:51.787 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:51.787 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:51.787 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.787 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:51.787 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.044 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:52.044 17:16:02 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.044 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:52.304 ===================================================== 00:09:52.304 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.304 ===================================================== 00:09:52.304 Controller Capabilities/Features 
00:09:52.304 ================================ 00:09:52.304 Vendor ID: 1b36 00:09:52.304 Subsystem Vendor ID: 1af4 00:09:52.304 Serial Number: 12340 00:09:52.304 Model Number: QEMU NVMe Ctrl 00:09:52.304 Firmware Version: 8.0.0 00:09:52.304 Recommended Arb Burst: 6 00:09:52.304 IEEE OUI Identifier: 00 54 52 00:09:52.304 Multi-path I/O 00:09:52.304 May have multiple subsystem ports: No 00:09:52.304 May have multiple controllers: No 00:09:52.304 Associated with SR-IOV VF: No 00:09:52.304 Max Data Transfer Size: 524288 00:09:52.304 Max Number of Namespaces: 256 00:09:52.304 Max Number of I/O Queues: 64 00:09:52.304 NVMe Specification Version (VS): 1.4 00:09:52.304 NVMe Specification Version (Identify): 1.4 00:09:52.304 Maximum Queue Entries: 2048 00:09:52.304 Contiguous Queues Required: Yes 00:09:52.304 Arbitration Mechanisms Supported 00:09:52.304 Weighted Round Robin: Not Supported 00:09:52.304 Vendor Specific: Not Supported 00:09:52.304 Reset Timeout: 7500 ms 00:09:52.304 Doorbell Stride: 4 bytes 00:09:52.304 NVM Subsystem Reset: Not Supported 00:09:52.304 Command Sets Supported 00:09:52.304 NVM Command Set: Supported 00:09:52.304 Boot Partition: Not Supported 00:09:52.304 Memory Page Size Minimum: 4096 bytes 00:09:52.304 Memory Page Size Maximum: 65536 bytes 00:09:52.304 Persistent Memory Region: Not Supported 00:09:52.305 Optional Asynchronous Events Supported 00:09:52.305 Namespace Attribute Notices: Supported 00:09:52.305 Firmware Activation Notices: Not Supported 00:09:52.305 ANA Change Notices: Not Supported 00:09:52.305 PLE Aggregate Log Change Notices: Not Supported 00:09:52.305 LBA Status Info Alert Notices: Not Supported 00:09:52.305 EGE Aggregate Log Change Notices: Not Supported 00:09:52.305 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.305 Zone Descriptor Change Notices: Not Supported 00:09:52.305 Discovery Log Change Notices: Not Supported 00:09:52.305 Controller Attributes 00:09:52.305 128-bit Host Identifier: Not Supported 00:09:52.305 Non-Operational Permissive Mode: Not Supported 00:09:52.305 NVM Sets: Not Supported 00:09:52.305 Read Recovery Levels: Not Supported 00:09:52.305 Endurance Groups: Not Supported 00:09:52.305 Predictable Latency Mode: Not Supported 00:09:52.305 Traffic Based Keep ALive: Not Supported 00:09:52.305 Namespace Granularity: Not Supported 00:09:52.305 SQ Associations: Not Supported 00:09:52.305 UUID List: Not Supported 00:09:52.305 Multi-Domain Subsystem: Not Supported 00:09:52.305 Fixed Capacity Management: Not Supported 00:09:52.305 Variable Capacity Management: Not Supported 00:09:52.305 Delete Endurance Group: Not Supported 00:09:52.305 Delete NVM Set: Not Supported 00:09:52.305 Extended LBA Formats Supported: Supported 00:09:52.305 Flexible Data Placement Supported: Not Supported 00:09:52.305 00:09:52.305 Controller Memory Buffer Support 00:09:52.305 ================================ 00:09:52.305 Supported: No 00:09:52.305 00:09:52.305 Persistent Memory Region Support 00:09:52.305 ================================ 00:09:52.305 Supported: No 00:09:52.305 00:09:52.305 Admin Command Set Attributes 00:09:52.305 ============================ 00:09:52.305 Security Send/Receive: Not Supported 00:09:52.305 Format NVM: Supported 00:09:52.305 Firmware Activate/Download: Not Supported 00:09:52.305 Namespace Management: Supported 00:09:52.305 Device Self-Test: Not Supported 00:09:52.305 Directives: Supported 00:09:52.305 NVMe-MI: Not Supported 00:09:52.305 Virtualization Management: Not Supported 00:09:52.305 Doorbell Buffer Config: Supported 
00:09:52.305 Get LBA Status Capability: Not Supported 00:09:52.305 Command & Feature Lockdown Capability: Not Supported 00:09:52.305 Abort Command Limit: 4 00:09:52.305 Async Event Request Limit: 4 00:09:52.305 Number of Firmware Slots: N/A 00:09:52.305 Firmware Slot 1 Read-Only: N/A 00:09:52.305 Firmware Activation Without Reset: N/A 00:09:52.305 Multiple Update Detection Support: N/A 00:09:52.305 Firmware Update Granularity: No Information Provided 00:09:52.305 Per-Namespace SMART Log: Yes 00:09:52.305 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.305 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:52.305 Command Effects Log Page: Supported 00:09:52.305 Get Log Page Extended Data: Supported 00:09:52.305 Telemetry Log Pages: Not Supported 00:09:52.305 Persistent Event Log Pages: Not Supported 00:09:52.305 Supported Log Pages Log Page: May Support 00:09:52.305 Commands Supported & Effects Log Page: Not Supported 00:09:52.305 Feature Identifiers & Effects Log Page:May Support 00:09:52.305 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.305 Data Area 4 for Telemetry Log: Not Supported 00:09:52.305 Error Log Page Entries Supported: 1 00:09:52.305 Keep Alive: Not Supported 00:09:52.305 00:09:52.305 NVM Command Set Attributes 00:09:52.305 ========================== 00:09:52.305 Submission Queue Entry Size 00:09:52.305 Max: 64 00:09:52.305 Min: 64 00:09:52.305 Completion Queue Entry Size 00:09:52.305 Max: 16 00:09:52.305 Min: 16 00:09:52.305 Number of Namespaces: 256 00:09:52.305 Compare Command: Supported 00:09:52.305 Write Uncorrectable Command: Not Supported 00:09:52.305 Dataset Management Command: Supported 00:09:52.305 Write Zeroes Command: Supported 00:09:52.305 Set Features Save Field: Supported 00:09:52.305 Reservations: Not Supported 00:09:52.305 Timestamp: Supported 00:09:52.305 Copy: Supported 00:09:52.305 Volatile Write Cache: Present 00:09:52.305 Atomic Write Unit (Normal): 1 00:09:52.305 Atomic Write Unit (PFail): 1 00:09:52.305 Atomic Compare & Write Unit: 1 00:09:52.305 Fused Compare & Write: Not Supported 00:09:52.305 Scatter-Gather List 00:09:52.305 SGL Command Set: Supported 00:09:52.305 SGL Keyed: Not Supported 00:09:52.305 SGL Bit Bucket Descriptor: Not Supported 00:09:52.305 SGL Metadata Pointer: Not Supported 00:09:52.305 Oversized SGL: Not Supported 00:09:52.305 SGL Metadata Address: Not Supported 00:09:52.305 SGL Offset: Not Supported 00:09:52.305 Transport SGL Data Block: Not Supported 00:09:52.305 Replay Protected Memory Block: Not Supported 00:09:52.305 00:09:52.305 Firmware Slot Information 00:09:52.305 ========================= 00:09:52.305 Active slot: 1 00:09:52.305 Slot 1 Firmware Revision: 1.0 00:09:52.305 00:09:52.305 00:09:52.305 Commands Supported and Effects 00:09:52.305 ============================== 00:09:52.305 Admin Commands 00:09:52.305 -------------- 00:09:52.305 Delete I/O Submission Queue (00h): Supported 00:09:52.305 Create I/O Submission Queue (01h): Supported 00:09:52.305 Get Log Page (02h): Supported 00:09:52.305 Delete I/O Completion Queue (04h): Supported 00:09:52.305 Create I/O Completion Queue (05h): Supported 00:09:52.305 Identify (06h): Supported 00:09:52.305 Abort (08h): Supported 00:09:52.305 Set Features (09h): Supported 00:09:52.305 Get Features (0Ah): Supported 00:09:52.305 Asynchronous Event Request (0Ch): Supported 00:09:52.305 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.305 Directive Send (19h): Supported 00:09:52.305 Directive Receive (1Ah): Supported 00:09:52.305 Virtualization 
Management (1Ch): Supported 00:09:52.305 Doorbell Buffer Config (7Ch): Supported 00:09:52.305 Format NVM (80h): Supported LBA-Change 00:09:52.305 I/O Commands 00:09:52.305 ------------ 00:09:52.305 Flush (00h): Supported LBA-Change 00:09:52.305 Write (01h): Supported LBA-Change 00:09:52.305 Read (02h): Supported 00:09:52.305 Compare (05h): Supported 00:09:52.305 Write Zeroes (08h): Supported LBA-Change 00:09:52.305 Dataset Management (09h): Supported LBA-Change 00:09:52.305 Unknown (0Ch): Supported 00:09:52.305 Unknown (12h): Supported 00:09:52.305 Copy (19h): Supported LBA-Change 00:09:52.305 Unknown (1Dh): Supported LBA-Change 00:09:52.305 00:09:52.305 Error Log 00:09:52.305 ========= 00:09:52.305 00:09:52.305 Arbitration 00:09:52.305 =========== 00:09:52.305 Arbitration Burst: no limit 00:09:52.305 00:09:52.305 Power Management 00:09:52.305 ================ 00:09:52.305 Number of Power States: 1 00:09:52.305 Current Power State: Power State #0 00:09:52.305 Power State #0: 00:09:52.305 Max Power: 25.00 W 00:09:52.305 Non-Operational State: Operational 00:09:52.305 Entry Latency: 16 microseconds 00:09:52.305 Exit Latency: 4 microseconds 00:09:52.305 Relative Read Throughput: 0 00:09:52.305 Relative Read Latency: 0 00:09:52.305 Relative Write Throughput: 0 00:09:52.305 Relative Write Latency: 0 00:09:52.305 Idle Power: Not Reported 00:09:52.305 Active Power: Not Reported 00:09:52.305 Non-Operational Permissive Mode: Not Supported 00:09:52.305 00:09:52.305 Health Information 00:09:52.305 ================== 00:09:52.305 Critical Warnings: 00:09:52.305 Available Spare Space: OK 00:09:52.305 Temperature: OK 00:09:52.305 Device Reliability: OK 00:09:52.305 Read Only: No 00:09:52.305 Volatile Memory Backup: OK 00:09:52.305 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.305 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.305 Available Spare: 0% 00:09:52.305 Available Spare Threshold: 0% 00:09:52.305 Life Percentage Used: 0% 00:09:52.305 Data Units Read: 1041 00:09:52.305 Data Units Written: 874 00:09:52.305 Host Read Commands: 49548 00:09:52.305 Host Write Commands: 48056 00:09:52.305 Controller Busy Time: 0 minutes 00:09:52.305 Power Cycles: 0 00:09:52.305 Power On Hours: 0 hours 00:09:52.306 Unsafe Shutdowns: 0 00:09:52.306 Unrecoverable Media Errors: 0 00:09:52.306 Lifetime Error Log Entries: 0 00:09:52.306 Warning Temperature Time: 0 minutes 00:09:52.306 Critical Temperature Time: 0 minutes 00:09:52.306 00:09:52.306 Number of Queues 00:09:52.306 ================ 00:09:52.306 Number of I/O Submission Queues: 64 00:09:52.306 Number of I/O Completion Queues: 64 00:09:52.306 00:09:52.306 ZNS Specific Controller Data 00:09:52.306 ============================ 00:09:52.306 Zone Append Size Limit: 0 00:09:52.306 00:09:52.306 00:09:52.306 Active Namespaces 00:09:52.306 ================= 00:09:52.306 Namespace ID:1 00:09:52.306 Error Recovery Timeout: Unlimited 00:09:52.306 Command Set Identifier: NVM (00h) 00:09:52.306 Deallocate: Supported 00:09:52.306 Deallocated/Unwritten Error: Supported 00:09:52.306 Deallocated Read Value: All 0x00 00:09:52.306 Deallocate in Write Zeroes: Not Supported 00:09:52.306 Deallocated Guard Field: 0xFFFF 00:09:52.306 Flush: Supported 00:09:52.306 Reservation: Not Supported 00:09:52.306 Metadata Transferred as: Separate Metadata Buffer 00:09:52.306 Namespace Sharing Capabilities: Private 00:09:52.306 Size (in LBAs): 1548666 (5GiB) 00:09:52.306 Capacity (in LBAs): 1548666 (5GiB) 00:09:52.306 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.306 Thin 
Provisioning: Not Supported 00:09:52.306 Per-NS Atomic Units: No 00:09:52.306 Maximum Single Source Range Length: 128 00:09:52.306 Maximum Copy Length: 128 00:09:52.306 Maximum Source Range Count: 128 00:09:52.306 NGUID/EUI64 Never Reused: No 00:09:52.306 Namespace Write Protected: No 00:09:52.306 Number of LBA Formats: 8 00:09:52.306 Current LBA Format: LBA Format #07 00:09:52.306 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.306 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.306 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.306 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.306 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.306 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.306 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.306 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.306 00:09:52.306 NVM Specific Namespace Data 00:09:52.306 =========================== 00:09:52.306 Logical Block Storage Tag Mask: 0 00:09:52.306 Protection Information Capabilities: 00:09:52.306 16b Guard Protection Information Storage Tag Support: No 00:09:52.306 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.306 Storage Tag Check Read Support: No 00:09:52.306 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.306 ===================================================== 00:09:52.306 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.306 ===================================================== 00:09:52.306 Controller Capabilities/Features 00:09:52.306 ================================ 00:09:52.306 Vendor ID: 1b36 00:09:52.306 Subsystem Vendor ID: 1af4 00:09:52.306 Serial Number: 12341 00:09:52.306 Model Number: QEMU NVMe Ctrl 00:09:52.306 Firmware Version: 8.0.0 00:09:52.306 Recommended Arb Burst: 6 00:09:52.306 IEEE OUI Identifier: 00 54 52 00:09:52.306 Multi-path I/O 00:09:52.306 May have multiple subsystem ports: No 00:09:52.306 May have multiple controllers: No 00:09:52.306 Associated with SR-IOV VF: No 00:09:52.306 Max Data Transfer Size: 524288 00:09:52.306 Max Number of Namespaces: 256 00:09:52.306 Max Number of I/O Queues: 64 00:09:52.306 NVMe Specification Version (VS): 1.4 00:09:52.306 NVMe Specification Version (Identify): 1.4 00:09:52.306 Maximum Queue Entries: 2048 00:09:52.306 Contiguous Queues Required: Yes 00:09:52.306 Arbitration Mechanisms Supported 00:09:52.306 Weighted Round Robin: Not Supported 00:09:52.306 Vendor Specific: Not Supported 00:09:52.306 Reset Timeout: 7500 ms 00:09:52.306 Doorbell Stride: 4 bytes 00:09:52.306 NVM Subsystem Reset: Not Supported 00:09:52.306 Command Sets Supported 00:09:52.306 NVM Command Set: Supported 00:09:52.306 Boot Partition: Not Supported 00:09:52.306 Memory Page Size Minimum: 
4096 bytes 00:09:52.306 Memory Page Size Maximum: 65536 bytes 00:09:52.306 Persistent Memory Region: Not Supported 00:09:52.306 Optional Asynchronous Events Supported 00:09:52.306 Namespace Attribute Notices: Supported 00:09:52.306 Firmware Activation Notices: Not Supported 00:09:52.306 ANA Change Notices: Not Supported 00:09:52.306 PLE Aggregate Log Change Notices: Not Supported 00:09:52.306 LBA Status Info Alert Notices: Not Supported 00:09:52.306 EGE Aggregate Log Change Notices: Not Supported 00:09:52.306 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.306 Zone Descriptor Change Notices: Not Supported 00:09:52.306 Discovery Log Change Notices: Not Supported 00:09:52.306 Controller Attributes 00:09:52.306 128-bit Host Identifier: Not Supported 00:09:52.306 Non-Operational Permissive Mode: Not Supported 00:09:52.306 NVM Sets: Not Supported 00:09:52.306 Read Recovery Levels: Not Supported 00:09:52.306 Endurance Groups: Not Supported 00:09:52.306 Predictable Latency Mode: Not Supported 00:09:52.306 Traffic Based Keep ALive: Not Supported 00:09:52.306 Namespace Granularity: Not Supported 00:09:52.306 SQ Associations: Not Supported 00:09:52.306 UUID List: Not Supported 00:09:52.306 Multi-Domain Subsystem: Not Supported 00:09:52.306 Fixed Capacity Management: Not Supported 00:09:52.306 Variable Capacity Management: Not Supported 00:09:52.306 Delete Endurance Group: Not Supported 00:09:52.306 Delete NVM Set: Not Supported 00:09:52.306 Extended LBA Formats Supported: Supported 00:09:52.306 Flexible Data Placement Supported: Not Supported 00:09:52.306 00:09:52.306 Controller Memory Buffer Support 00:09:52.306 ================================ 00:09:52.306 Supported: No 00:09:52.306 00:09:52.306 Persistent Memory Region Support 00:09:52.306 ================================ 00:09:52.306 Supported: No 00:09:52.306 00:09:52.306 Admin Command Set Attributes 00:09:52.306 ============================ 00:09:52.306 Security Send/Receive: Not Supported 00:09:52.306 Format NVM: Supported 00:09:52.306 Firmware Activate/Download: Not Supported 00:09:52.306 Namespace Management: Supported 00:09:52.306 Device Self-Test: Not Supported 00:09:52.306 Directives: Supported 00:09:52.306 NVMe-MI: Not Supported 00:09:52.306 Virtualization Management: Not Supported 00:09:52.306 Doorbell Buffer Config: Supported 00:09:52.306 Get LBA Status Capability: Not Supported 00:09:52.306 Command & Feature Lockdown Capability: Not Supported 00:09:52.306 Abort Command Limit: 4 00:09:52.306 Async Event Request Limit: 4 00:09:52.306 Number of Firmware Slots: N/A 00:09:52.306 Firmware Slot 1 Read-Only: N/A 00:09:52.306 Firmware Activation Without Reset: N/A 00:09:52.306 Multiple Update Detection Support: N/A 00:09:52.306 Firmware Update Granularity: No Information Provided 00:09:52.306 Per-Namespace SMART Log: Yes 00:09:52.306 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.306 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.306 Command Effects Log Page: Supported 00:09:52.306 Get Log Page Extended Data: Supported 00:09:52.306 Telemetry Log Pages: Not Supported 00:09:52.306 Persistent Event Log Pages: Not Supported 00:09:52.306 Supported Log Pages Log Page: May Support 00:09:52.306 Commands Supported & Effects Log Page: Not Supported 00:09:52.306 Feature Identifiers & Effects Log Page:May Support 00:09:52.306 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.306 Data Area 4 for Telemetry Log: Not Supported 00:09:52.306 Error Log Page Entries Supported: 1 00:09:52.306 Keep Alive: Not Supported 
00:09:52.306 00:09:52.306 NVM Command Set Attributes 00:09:52.306 ========================== 00:09:52.306 Submission Queue Entry Size 00:09:52.306 Max: 64 00:09:52.306 Min: 64 00:09:52.306 Completion Queue Entry Size 00:09:52.306 Max: 16 00:09:52.306 Min: 16 00:09:52.307 Number of Namespaces: 256 00:09:52.307 Compare Command: Supported 00:09:52.307 Write Uncorrectable Command: Not Supported 00:09:52.307 Dataset Management Command: Supported 00:09:52.307 Write Zeroes Command: Supported 00:09:52.307 Set Features Save Field: Supported 00:09:52.307 Reservations: Not Supported 00:09:52.307 Timestamp: Supported 00:09:52.307 Copy: Supported 00:09:52.307 Volatile Write Cache: Present 00:09:52.307 Atomic Write Unit (Normal): 1 00:09:52.307 Atomic Write Unit (PFail): 1 00:09:52.307 Atomic Compare & Write Unit: 1 00:09:52.307 Fused Compare & Write: Not Supported 00:09:52.307 Scatter-Gather List 00:09:52.307 SGL Command Set: Supported 00:09:52.307 SGL Keyed: Not Supported 00:09:52.307 SGL Bit Bucket Descriptor: Not Supported 00:09:52.307 SGL Metadata Pointer: Not Supported 00:09:52.307 Oversized SGL: Not Supported 00:09:52.307 SGL Metadata Address: Not Supported 00:09:52.307 SGL Offset: Not Supported 00:09:52.307 Transport SGL Data Block: Not Supported 00:09:52.307 Replay Protected Memory Block: Not Supported 00:09:52.307 00:09:52.307 Firmware Slot Information 00:09:52.307 ========================= 00:09:52.307 Active slot: 1 00:09:52.307 Slot 1 Firmware Revision: 1.0 00:09:52.307 00:09:52.307 00:09:52.307 Commands Supported and Effects 00:09:52.307 ============================== 00:09:52.307 Admin Commands 00:09:52.307 -------------- 00:09:52.307 Delete I/O Submission Queue (00h): Supported 00:09:52.307 Create I/O Submission Queue (01h): Supported 00:09:52.307 Get Log Page (02h): Supported 00:09:52.307 Delete I/O Completion Queue (04h): Supported 00:09:52.307 Create I/O Completion Queue (05h): Supported 00:09:52.307 Identify (06h): Supported 00:09:52.307 Abort (08h): Supported 00:09:52.307 Set Features (09h): Supported 00:09:52.307 Get Features (0Ah): Supported 00:09:52.307 Asynchronous Event Request (0Ch): Supported 00:09:52.307 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.307 Directive Send (19h): Supported 00:09:52.307 Directive Receive (1Ah): Supported 00:09:52.307 Virtualization Management (1Ch): Supported 00:09:52.307 Doorbell Buffer Config (7Ch): Supported 00:09:52.307 Format NVM (80h): Supported LBA-Change 00:09:52.307 I/O Commands 00:09:52.307 ------------ 00:09:52.307 Flush (00h): Supported LBA-Change 00:09:52.307 Write (01h): Supported LBA-Change 00:09:52.307 Read (02h): Supported 00:09:52.307 Compare (05h): Supported 00:09:52.307 Write Zeroes (08h): Supported LBA-Change 00:09:52.307 Dataset Management (09h): Supported LBA-Change 00:09:52.307 Unknown (0Ch): Supported 00:09:52.307 Unknown (12h): Supported 00:09:52.307 Copy (19h): Supported LBA-Change 00:09:52.307 Unknown (1Dh): Supported LBA-Change 00:09:52.307 00:09:52.307 Error Log 00:09:52.307 ========= 00:09:52.307 00:09:52.307 Arbitration 00:09:52.307 =========== 00:09:52.307 Arbitration Burst: no limit 00:09:52.307 00:09:52.307 Power Management 00:09:52.307 ================ 00:09:52.307 Number of Power States: 1 00:09:52.307 Current Power State: Power State #0 00:09:52.307 Power State #0: 00:09:52.307 Max Power: 25.00 W 00:09:52.307 Non-Operational State: Operational 00:09:52.307 Entry Latency: 16 microseconds 00:09:52.307 Exit Latency: 4 microseconds 00:09:52.307 Relative Read Throughput: 0 00:09:52.307 Relative 
Read Latency: 0 00:09:52.307 Relative Write Throughput: 0 00:09:52.307 Relative Write Latency: 0 00:09:52.307 Idle Power: Not Reported 00:09:52.307 Active Power: Not Reported 00:09:52.307 Non-Operational Permissive Mode: Not Supported 00:09:52.307 00:09:52.307 Health Information 00:09:52.307 ================== 00:09:52.307 Critical Warnings: 00:09:52.307 Available Spare Space: OK 00:09:52.307 Temperature: OK 00:09:52.307 Device Reliability: OK 00:09:52.307 Read Only: No 00:09:52.307 Volatile Memory Backup: OK 00:09:52.307 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.307 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.307 Available Spare: 0% 00:09:52.307 Available Spare Threshold: 0% 00:09:52.307 Life Percentage Used: 0% 00:09:52.307 Data Units Read: 754 00:09:52.307 Data Units Written: 601 00:09:52.307 Host Read Commands: 35205 00:09:52.307 Host Write Commands: 32886 00:09:52.307 Controller Busy Time: 0 minutes 00:09:52.307 Power Cycles: 0 00:09:52.307 Power On Hours: 0 hours 00:09:52.307 Unsafe Shutdowns: 0 00:09:52.307 Unrecoverable Media Errors: 0 00:09:52.307 Lifetime Error Log Entries: 0 00:09:52.307 Warning Temperature Time: 0 minutes 00:09:52.307 Critical Temperature Time: 0 minutes 00:09:52.307 00:09:52.307 Number of Queues 00:09:52.307 ================ 00:09:52.307 Number of I/O Submission Queues: 64 00:09:52.307 Number of I/O Completion Queues: 64 00:09:52.307 00:09:52.307 ZNS Specific Controller Data 00:09:52.307 ============================ 00:09:52.307 Zone Append Size Limit: 0 00:09:52.307 00:09:52.307 00:09:52.307 Active Namespaces 00:09:52.307 ================= 00:09:52.307 Namespace ID:1 00:09:52.307 Error Recovery Timeout: Unlimited 00:09:52.307 Command Set Identifier: NVM (00h) 00:09:52.307 Deallocate: Supported 00:09:52.307 Deallocated/Unwritten Error: Supported 00:09:52.307 Deallocated Read Value: All 0x00 00:09:52.307 Deallocate in Write Zeroes: Not Supported 00:09:52.307 Deallocated Guard Field: 0xFFFF 00:09:52.307 Flush: Supported 00:09:52.307 Reservation: Not Supported 00:09:52.307 Namespace Sharing Capabilities: Private 00:09:52.307 Size (in LBAs): 1310720 (5GiB) 00:09:52.307 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.307 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.307 Thin Provisioning: Not Supported 00:09:52.307 Per-NS Atomic Units: No 00:09:52.307 Maximum Single Source Range Length: 128 00:09:52.307 Maximum Copy Length: 128 00:09:52.307 Maximum Source Range Count: 128 00:09:52.307 NGUID/EUI64 Never Reused: No 00:09:52.307 Namespace Write Protected: No 00:09:52.307 Number of LBA Formats: 8 00:09:52.307 Current LBA Format: LBA Format #04 00:09:52.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.307 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.307 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.307 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.307 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.307 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.307 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.307 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.307 00:09:52.307 NVM Specific Namespace Data 00:09:52.307 =========================== 00:09:52.307 Logical Block Storage Tag Mask: 0 00:09:52.307 Protection Information Capabilities: 00:09:52.307 16b Guard Protection Information Storage Tag Support: No 00:09:52.307 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.307 Storage Tag Check Read Support: No 00:09:52.307 Extended LBA 
Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.307 ===================================================== 00:09:52.307 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.307 ===================================================== 00:09:52.307 Controller Capabilities/Features 00:09:52.307 ================================ 00:09:52.307 Vendor ID: 1b36 00:09:52.307 Subsystem Vendor ID: 1af4 00:09:52.307 Serial Number: 12343 00:09:52.307 Model Number: QEMU NVMe Ctrl 00:09:52.307 Firmware Version: 8.0.0 00:09:52.307 Recommended Arb Burst: 6 00:09:52.307 IEEE OUI Identifier: 00 54 52 00:09:52.307 Multi-path I/O 00:09:52.307 May have multiple subsystem ports: No 00:09:52.307 May have multiple controllers: Yes 00:09:52.307 Associated with SR-IOV VF: No 00:09:52.307 Max Data Transfer Size: 524288 00:09:52.307 Max Number of Namespaces: 256 00:09:52.307 Max Number of I/O Queues: 64 00:09:52.307 NVMe Specification Version (VS): 1.4 00:09:52.307 NVMe Specification Version (Identify): 1.4 00:09:52.307 Maximum Queue Entries: 2048 00:09:52.308 Contiguous Queues Required: Yes 00:09:52.308 Arbitration Mechanisms Supported 00:09:52.308 Weighted Round Robin: Not Supported 00:09:52.308 Vendor Specific: Not Supported 00:09:52.308 Reset Timeout: 7500 ms 00:09:52.308 Doorbell Stride: 4 bytes 00:09:52.308 NVM Subsystem Reset: Not Supported 00:09:52.308 Command Sets Supported 00:09:52.308 NVM Command Set: Supported 00:09:52.308 Boot Partition: Not Supported 00:09:52.308 Memory Page Size Minimum: 4096 bytes 00:09:52.308 Memory Page Size Maximum: 65536 bytes 00:09:52.308 Persistent Memory Region: Not Supported 00:09:52.308 Optional Asynchronous Events Supported 00:09:52.308 Namespace Attribute Notices: Supported 00:09:52.308 Firmware Activation Notices: Not Supported 00:09:52.308 ANA Change Notices: Not Supported 00:09:52.308 PLE Aggregate Log Change Notices: Not Supported 00:09:52.308 LBA Status Info Alert Notices: Not Supported 00:09:52.308 EGE Aggregate Log Change Notices: Not Supported 00:09:52.308 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.308 Zone Descriptor Change Notices: Not Supported 00:09:52.308 Discovery Log Change Notices: Not Supported 00:09:52.308 Controller Attributes 00:09:52.308 128-bit Host Identifier: Not Supported 00:09:52.308 Non-Operational Permissive Mode: Not Supported 00:09:52.308 NVM Sets: Not Supported 00:09:52.308 Read Recovery Levels: Not Supported 00:09:52.308 Endurance Groups: Supported 00:09:52.308 Predictable Latency Mode: Not Supported 00:09:52.308 Traffic Based Keep ALive: Not Supported 00:09:52.308 Namespace Granularity: Not Supported 00:09:52.308 SQ Associations: Not Supported 00:09:52.308 UUID List: Not Supported 00:09:52.308 Multi-Domain Subsystem: Not Supported 00:09:52.308 Fixed Capacity Management: Not 
Supported 00:09:52.308 Variable Capacity Management: Not Supported 00:09:52.308 Delete Endurance Group: Not Supported 00:09:52.308 Delete NVM Set: Not Supported 00:09:52.308 Extended LBA Formats Supported: Supported 00:09:52.308 Flexible Data Placement Supported: Supported 00:09:52.308 00:09:52.308 Controller Memory Buffer Support 00:09:52.308 ================================ 00:09:52.308 Supported: No 00:09:52.308 00:09:52.308 Persistent Memory Region Support 00:09:52.308 ================================ 00:09:52.308 Supported: No 00:09:52.308 00:09:52.308 Admin Command Set Attributes 00:09:52.308 ============================ 00:09:52.308 Security Send/Receive: Not Supported 00:09:52.308 Format NVM: Supported 00:09:52.308 Firmware Activate/Download: Not Supported 00:09:52.308 Namespace Management: Supported 00:09:52.308 Device Self-Test: Not Supported 00:09:52.308 Directives: Supported 00:09:52.308 NVMe-MI: Not Supported 00:09:52.308 Virtualization Management: Not Supported 00:09:52.308 Doorbell Buffer Config: Supported 00:09:52.308 Get LBA Status Capability: Not Supported 00:09:52.308 Command & Feature Lockdown Capability: Not Supported 00:09:52.308 Abort Command Limit: 4 00:09:52.308 Async Event Request Limit: 4 00:09:52.308 Number of Firmware Slots: N/A 00:09:52.308 Firmware Slot 1 Read-Only: N/A 00:09:52.308 Firmware Activation Without Reset: N/A 00:09:52.308 Multiple Update Detection Support: N/A 00:09:52.308 Firmware Update Granularity: No Information Provided 00:09:52.308 Per-Namespace SMART Log: Yes 00:09:52.308 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.308 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.308 Command Effects Log Page: Supported 00:09:52.308 Get Log Page Extended Data: Supported 00:09:52.308 Telemetry Log Pages: Not Supported 00:09:52.308 Persistent Event Log Pages: Not Supported 00:09:52.308 Supported Log Pages Log Page: May Support 00:09:52.308 Commands Supported & Effects Log Page: Not Supported 00:09:52.308 Feature Identifiers & Effects Log Page:May Support 00:09:52.308 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.308 Data Area 4 for Telemetry Log: Not Supported 00:09:52.308 Error Log Page Entries Supported: 1 00:09:52.308 Keep Alive: Not Supported 00:09:52.308 00:09:52.308 NVM Command Set Attributes 00:09:52.308 ========================== 00:09:52.308 Submission Queue Entry Size 00:09:52.308 Max: 64 00:09:52.308 Min: 64 00:09:52.308 Completion Queue Entry Size 00:09:52.308 Max: 16 00:09:52.308 Min: 16 00:09:52.308 Number of Namespaces: 256 00:09:52.308 Compare Command: Supported 00:09:52.308 Write Uncorrectable Command: Not Supported 00:09:52.308 Dataset Management Command: Supported 00:09:52.308 Write Zeroes Command: Supported 00:09:52.308 Set Features Save Field: Supported 00:09:52.308 Reservations: Not Supported 00:09:52.308 Timestamp: Supported 00:09:52.308 Copy: Supported 00:09:52.308 Volatile Write Cache: Present 00:09:52.308 Atomic Write Unit (Normal): 1 00:09:52.308 Atomic Write Unit (PFail): 1 00:09:52.308 Atomic Compare & Write Unit: 1 00:09:52.308 Fused Compare & Write: Not Supported 00:09:52.308 Scatter-Gather List 00:09:52.308 SGL Command Set: Supported 00:09:52.308 SGL Keyed: Not Supported 00:09:52.308 SGL Bit Bucket Descriptor: Not Supported 00:09:52.308 SGL Metadata Pointer: Not Supported 00:09:52.308 Oversized SGL: Not Supported 00:09:52.308 SGL Metadata Address: Not Supported 00:09:52.308 SGL Offset: Not Supported 00:09:52.308 Transport SGL Data Block: Not Supported 00:09:52.308 Replay Protected Memory 
Block: Not Supported 00:09:52.308 00:09:52.308 Firmware Slot Information 00:09:52.308 ========================= 00:09:52.308 Active slot: 1 00:09:52.308 Slot 1 Firmware Revision: 1.0 00:09:52.308 00:09:52.308 00:09:52.308 Commands Supported and Effects 00:09:52.308 ============================== 00:09:52.308 Admin Commands 00:09:52.308 -------------- 00:09:52.308 Delete I/O Submission Queue (00h): Supported 00:09:52.308 Create I/O Submission Queue (01h): Supported 00:09:52.308 Get Log Page (02h): Supported 00:09:52.308 Delete I/O Completion Queue (04h): Supported 00:09:52.308 Create I/O Completion Queue (05h): Supported 00:09:52.308 Identify (06h): Supported 00:09:52.308 Abort (08h): Supported 00:09:52.308 Set Features (09h): Supported 00:09:52.308 Get Features (0Ah): Supported 00:09:52.308 Asynchronous Event Request (0Ch): Supported 00:09:52.308 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.308 Directive Send (19h): Supported 00:09:52.308 Directive Receive (1Ah): Supported 00:09:52.308 Virtualization Management (1Ch): Supported 00:09:52.308 Doorbell Buffer Config (7Ch): Supported 00:09:52.308 Format NVM (80h): Supported LBA-Change 00:09:52.308 I/O Commands 00:09:52.308 ------------ 00:09:52.308 Flush (00h): Supported LBA-Change 00:09:52.308 Write (01h): Supported LBA-Change 00:09:52.308 Read (02h): Supported 00:09:52.308 Compare (05h): Supported 00:09:52.308 Write Zeroes (08h): Supported LBA-Change 00:09:52.308 Dataset Management (09h): Supported LBA-Change 00:09:52.308 Unknown (0Ch): Supported 00:09:52.308 Unknown (12h): Supported 00:09:52.308 Copy (19h): Supported LBA-Change 00:09:52.308 Unknown (1Dh): Supported LBA-Change 00:09:52.308 00:09:52.308 Error Log 00:09:52.308 ========= 00:09:52.308 00:09:52.308 Arbitration 00:09:52.308 =========== 00:09:52.308 Arbitration Burst: no limit 00:09:52.308 00:09:52.308 Power Management 00:09:52.308 ================ 00:09:52.308 Number of Power States: 1 00:09:52.308 Current Power State: Power State #0 00:09:52.308 Power State #0: 00:09:52.308 Max Power: 25.00 W 00:09:52.308 Non-Operational State: Operational 00:09:52.308 Entry Latency: 16 microseconds 00:09:52.308 Exit Latency: 4 microseconds 00:09:52.308 Relative Read Throughput: 0 00:09:52.308 Relative Read Latency: 0 00:09:52.308 Relative Write Throughput: 0 00:09:52.308 Relative Write Latency: 0 00:09:52.308 Idle Power: Not Reported 00:09:52.308 Active Power: Not Reported 00:09:52.308 Non-Operational Permissive Mode: Not Supported 00:09:52.308 00:09:52.308 Health Information 00:09:52.308 ================== 00:09:52.308 Critical Warnings: 00:09:52.308 Available Spare Space: OK 00:09:52.308 Temperature: OK 00:09:52.308 Device Reliability: OK 00:09:52.308 Read Only: No 00:09:52.308 Volatile Memory Backup: OK 00:09:52.308 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.309 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.309 Available Spare: 0% 00:09:52.309 Available Spare Threshold: 0% 00:09:52.309 Life Percentage Used: [2024-07-15 17:16:02.917709] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 82068 terminated unexpected 00:09:52.309 [2024-07-15 17:16:02.918975] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 82068 terminated unexpected 00:09:52.309 [2024-07-15 17:16:02.919847] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 82068 terminated unexpected 00:09:52.309 0% 00:09:52.309 Data Units Read: 797 00:09:52.309 Data Units Written: 690 00:09:52.309 Host 
Read Commands: 34996 00:09:52.309 Host Write Commands: 33586 00:09:52.309 Controller Busy Time: 0 minutes 00:09:52.309 Power Cycles: 0 00:09:52.309 Power On Hours: 0 hours 00:09:52.309 Unsafe Shutdowns: 0 00:09:52.309 Unrecoverable Media Errors: 0 00:09:52.309 Lifetime Error Log Entries: 0 00:09:52.309 Warning Temperature Time: 0 minutes 00:09:52.309 Critical Temperature Time: 0 minutes 00:09:52.309 00:09:52.309 Number of Queues 00:09:52.309 ================ 00:09:52.309 Number of I/O Submission Queues: 64 00:09:52.309 Number of I/O Completion Queues: 64 00:09:52.309 00:09:52.309 ZNS Specific Controller Data 00:09:52.309 ============================ 00:09:52.309 Zone Append Size Limit: 0 00:09:52.309 00:09:52.309 00:09:52.309 Active Namespaces 00:09:52.309 ================= 00:09:52.309 Namespace ID:1 00:09:52.309 Error Recovery Timeout: Unlimited 00:09:52.309 Command Set Identifier: NVM (00h) 00:09:52.309 Deallocate: Supported 00:09:52.309 Deallocated/Unwritten Error: Supported 00:09:52.309 Deallocated Read Value: All 0x00 00:09:52.309 Deallocate in Write Zeroes: Not Supported 00:09:52.309 Deallocated Guard Field: 0xFFFF 00:09:52.309 Flush: Supported 00:09:52.309 Reservation: Not Supported 00:09:52.309 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.309 Size (in LBAs): 262144 (1GiB) 00:09:52.309 Capacity (in LBAs): 262144 (1GiB) 00:09:52.309 Utilization (in LBAs): 262144 (1GiB) 00:09:52.309 Thin Provisioning: Not Supported 00:09:52.309 Per-NS Atomic Units: No 00:09:52.309 Maximum Single Source Range Length: 128 00:09:52.309 Maximum Copy Length: 128 00:09:52.309 Maximum Source Range Count: 128 00:09:52.309 NGUID/EUI64 Never Reused: No 00:09:52.309 Namespace Write Protected: No 00:09:52.309 Endurance group ID: 1 00:09:52.309 Number of LBA Formats: 8 00:09:52.309 Current LBA Format: LBA Format #04 00:09:52.309 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.309 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.309 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.309 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.309 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.309 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.309 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.309 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.309 00:09:52.309 Get Feature FDP: 00:09:52.309 ================ 00:09:52.309 Enabled: Yes 00:09:52.309 FDP configuration index: 0 00:09:52.309 00:09:52.309 FDP configurations log page 00:09:52.309 =========================== 00:09:52.309 Number of FDP configurations: 1 00:09:52.309 Version: 0 00:09:52.309 Size: 112 00:09:52.309 FDP Configuration Descriptor: 0 00:09:52.309 Descriptor Size: 96 00:09:52.309 Reclaim Group Identifier format: 2 00:09:52.309 FDP Volatile Write Cache: Not Present 00:09:52.309 FDP Configuration: Valid 00:09:52.309 Vendor Specific Size: 0 00:09:52.309 Number of Reclaim Groups: 2 00:09:52.309 Number of Recalim Unit Handles: 8 00:09:52.309 Max Placement Identifiers: 128 00:09:52.309 Number of Namespaces Suppprted: 256 00:09:52.309 Reclaim unit Nominal Size: 6000000 bytes 00:09:52.309 Estimated Reclaim Unit Time Limit: Not Reported 00:09:52.309 RUH Desc #000: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #001: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #002: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #003: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #004: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #005: RUH Type: Initially Isolated 
00:09:52.309 RUH Desc #006: RUH Type: Initially Isolated 00:09:52.309 RUH Desc #007: RUH Type: Initially Isolated 00:09:52.309 00:09:52.309 FDP reclaim unit handle usage log page 00:09:52.309 ====================================== 00:09:52.309 Number of Reclaim Unit Handles: 8 00:09:52.309 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:52.309 RUH Usage Desc #001: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #002: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #003: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #004: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #005: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #006: RUH Attributes: Unused 00:09:52.309 RUH Usage Desc #007: RUH Attributes: Unused 00:09:52.309 00:09:52.309 FDP statistics log page 00:09:52.309 ======================= 00:09:52.309 Host bytes with metadata written: 427532288 00:09:52.309 Media bytes with metadata written: 427577344 00:09:52.309 [2024-07-15 17:16:02.922720] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 82068 terminated unexpected 00:09:52.309 Media bytes erased: 0 00:09:52.309 00:09:52.309 FDP events log page 00:09:52.309 =================== 00:09:52.309 Number of FDP events: 0 00:09:52.309 00:09:52.309 NVM Specific Namespace Data 00:09:52.309 =========================== 00:09:52.309 Logical Block Storage Tag Mask: 0 00:09:52.309 Protection Information Capabilities: 00:09:52.309 16b Guard Protection Information Storage Tag Support: No 00:09:52.309 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.309 Storage Tag Check Read Support: No 00:09:52.309 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.309 ===================================================== 00:09:52.309 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.309 ===================================================== 00:09:52.309 Controller Capabilities/Features 00:09:52.309 ================================ 00:09:52.309 Vendor ID: 1b36 00:09:52.309 Subsystem Vendor ID: 1af4 00:09:52.309 Serial Number: 12342 00:09:52.309 Model Number: QEMU NVMe Ctrl 00:09:52.309 Firmware Version: 8.0.0 00:09:52.309 Recommended Arb Burst: 6 00:09:52.309 IEEE OUI Identifier: 00 54 52 00:09:52.309 Multi-path I/O 00:09:52.309 May have multiple subsystem ports: No 00:09:52.309 May have multiple controllers: No 00:09:52.309 Associated with SR-IOV VF: No 00:09:52.309 Max Data Transfer Size: 524288 00:09:52.309 Max Number of Namespaces: 256 00:09:52.309 Max Number of I/O Queues: 64 00:09:52.309 NVMe Specification Version (VS): 1.4 00:09:52.309 NVMe Specification Version (Identify): 1.4 00:09:52.309 Maximum Queue Entries: 2048 00:09:52.309 Contiguous Queues Required: Yes 00:09:52.309 Arbitration 
Mechanisms Supported 00:09:52.309 Weighted Round Robin: Not Supported 00:09:52.309 Vendor Specific: Not Supported 00:09:52.309 Reset Timeout: 7500 ms 00:09:52.309 Doorbell Stride: 4 bytes 00:09:52.310 NVM Subsystem Reset: Not Supported 00:09:52.310 Command Sets Supported 00:09:52.310 NVM Command Set: Supported 00:09:52.310 Boot Partition: Not Supported 00:09:52.310 Memory Page Size Minimum: 4096 bytes 00:09:52.310 Memory Page Size Maximum: 65536 bytes 00:09:52.310 Persistent Memory Region: Not Supported 00:09:52.310 Optional Asynchronous Events Supported 00:09:52.310 Namespace Attribute Notices: Supported 00:09:52.310 Firmware Activation Notices: Not Supported 00:09:52.310 ANA Change Notices: Not Supported 00:09:52.310 PLE Aggregate Log Change Notices: Not Supported 00:09:52.310 LBA Status Info Alert Notices: Not Supported 00:09:52.310 EGE Aggregate Log Change Notices: Not Supported 00:09:52.310 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.310 Zone Descriptor Change Notices: Not Supported 00:09:52.310 Discovery Log Change Notices: Not Supported 00:09:52.310 Controller Attributes 00:09:52.310 128-bit Host Identifier: Not Supported 00:09:52.310 Non-Operational Permissive Mode: Not Supported 00:09:52.310 NVM Sets: Not Supported 00:09:52.310 Read Recovery Levels: Not Supported 00:09:52.310 Endurance Groups: Not Supported 00:09:52.310 Predictable Latency Mode: Not Supported 00:09:52.310 Traffic Based Keep ALive: Not Supported 00:09:52.310 Namespace Granularity: Not Supported 00:09:52.310 SQ Associations: Not Supported 00:09:52.310 UUID List: Not Supported 00:09:52.310 Multi-Domain Subsystem: Not Supported 00:09:52.310 Fixed Capacity Management: Not Supported 00:09:52.310 Variable Capacity Management: Not Supported 00:09:52.310 Delete Endurance Group: Not Supported 00:09:52.310 Delete NVM Set: Not Supported 00:09:52.310 Extended LBA Formats Supported: Supported 00:09:52.310 Flexible Data Placement Supported: Not Supported 00:09:52.310 00:09:52.310 Controller Memory Buffer Support 00:09:52.310 ================================ 00:09:52.310 Supported: No 00:09:52.310 00:09:52.310 Persistent Memory Region Support 00:09:52.310 ================================ 00:09:52.310 Supported: No 00:09:52.310 00:09:52.310 Admin Command Set Attributes 00:09:52.310 ============================ 00:09:52.310 Security Send/Receive: Not Supported 00:09:52.310 Format NVM: Supported 00:09:52.310 Firmware Activate/Download: Not Supported 00:09:52.310 Namespace Management: Supported 00:09:52.310 Device Self-Test: Not Supported 00:09:52.310 Directives: Supported 00:09:52.310 NVMe-MI: Not Supported 00:09:52.310 Virtualization Management: Not Supported 00:09:52.310 Doorbell Buffer Config: Supported 00:09:52.310 Get LBA Status Capability: Not Supported 00:09:52.310 Command & Feature Lockdown Capability: Not Supported 00:09:52.310 Abort Command Limit: 4 00:09:52.310 Async Event Request Limit: 4 00:09:52.310 Number of Firmware Slots: N/A 00:09:52.310 Firmware Slot 1 Read-Only: N/A 00:09:52.310 Firmware Activation Without Reset: N/A 00:09:52.310 Multiple Update Detection Support: N/A 00:09:52.310 Firmware Update Granularity: No Information Provided 00:09:52.310 Per-Namespace SMART Log: Yes 00:09:52.310 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.310 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:52.310 Command Effects Log Page: Supported 00:09:52.310 Get Log Page Extended Data: Supported 00:09:52.310 Telemetry Log Pages: Not Supported 00:09:52.310 Persistent Event Log Pages: Not Supported 
00:09:52.310 Supported Log Pages Log Page: May Support 00:09:52.310 Commands Supported & Effects Log Page: Not Supported 00:09:52.310 Feature Identifiers & Effects Log Page:May Support 00:09:52.310 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.310 Data Area 4 for Telemetry Log: Not Supported 00:09:52.310 Error Log Page Entries Supported: 1 00:09:52.310 Keep Alive: Not Supported 00:09:52.310 00:09:52.310 NVM Command Set Attributes 00:09:52.310 ========================== 00:09:52.310 Submission Queue Entry Size 00:09:52.310 Max: 64 00:09:52.310 Min: 64 00:09:52.310 Completion Queue Entry Size 00:09:52.310 Max: 16 00:09:52.310 Min: 16 00:09:52.310 Number of Namespaces: 256 00:09:52.310 Compare Command: Supported 00:09:52.310 Write Uncorrectable Command: Not Supported 00:09:52.310 Dataset Management Command: Supported 00:09:52.310 Write Zeroes Command: Supported 00:09:52.310 Set Features Save Field: Supported 00:09:52.310 Reservations: Not Supported 00:09:52.310 Timestamp: Supported 00:09:52.310 Copy: Supported 00:09:52.310 Volatile Write Cache: Present 00:09:52.310 Atomic Write Unit (Normal): 1 00:09:52.310 Atomic Write Unit (PFail): 1 00:09:52.310 Atomic Compare & Write Unit: 1 00:09:52.310 Fused Compare & Write: Not Supported 00:09:52.310 Scatter-Gather List 00:09:52.310 SGL Command Set: Supported 00:09:52.310 SGL Keyed: Not Supported 00:09:52.310 SGL Bit Bucket Descriptor: Not Supported 00:09:52.310 SGL Metadata Pointer: Not Supported 00:09:52.310 Oversized SGL: Not Supported 00:09:52.310 SGL Metadata Address: Not Supported 00:09:52.310 SGL Offset: Not Supported 00:09:52.310 Transport SGL Data Block: Not Supported 00:09:52.310 Replay Protected Memory Block: Not Supported 00:09:52.310 00:09:52.310 Firmware Slot Information 00:09:52.310 ========================= 00:09:52.310 Active slot: 1 00:09:52.310 Slot 1 Firmware Revision: 1.0 00:09:52.310 00:09:52.310 00:09:52.310 Commands Supported and Effects 00:09:52.310 ============================== 00:09:52.310 Admin Commands 00:09:52.310 -------------- 00:09:52.310 Delete I/O Submission Queue (00h): Supported 00:09:52.310 Create I/O Submission Queue (01h): Supported 00:09:52.310 Get Log Page (02h): Supported 00:09:52.310 Delete I/O Completion Queue (04h): Supported 00:09:52.310 Create I/O Completion Queue (05h): Supported 00:09:52.310 Identify (06h): Supported 00:09:52.310 Abort (08h): Supported 00:09:52.310 Set Features (09h): Supported 00:09:52.310 Get Features (0Ah): Supported 00:09:52.310 Asynchronous Event Request (0Ch): Supported 00:09:52.310 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.310 Directive Send (19h): Supported 00:09:52.310 Directive Receive (1Ah): Supported 00:09:52.310 Virtualization Management (1Ch): Supported 00:09:52.310 Doorbell Buffer Config (7Ch): Supported 00:09:52.310 Format NVM (80h): Supported LBA-Change 00:09:52.310 I/O Commands 00:09:52.310 ------------ 00:09:52.310 Flush (00h): Supported LBA-Change 00:09:52.310 Write (01h): Supported LBA-Change 00:09:52.310 Read (02h): Supported 00:09:52.310 Compare (05h): Supported 00:09:52.310 Write Zeroes (08h): Supported LBA-Change 00:09:52.310 Dataset Management (09h): Supported LBA-Change 00:09:52.310 Unknown (0Ch): Supported 00:09:52.310 Unknown (12h): Supported 00:09:52.310 Copy (19h): Supported LBA-Change 00:09:52.310 Unknown (1Dh): Supported LBA-Change 00:09:52.310 00:09:52.310 Error Log 00:09:52.310 ========= 00:09:52.310 00:09:52.310 Arbitration 00:09:52.310 =========== 00:09:52.310 Arbitration Burst: no limit 00:09:52.310 00:09:52.310 
Power Management 00:09:52.310 ================ 00:09:52.310 Number of Power States: 1 00:09:52.310 Current Power State: Power State #0 00:09:52.310 Power State #0: 00:09:52.310 Max Power: 25.00 W 00:09:52.310 Non-Operational State: Operational 00:09:52.310 Entry Latency: 16 microseconds 00:09:52.310 Exit Latency: 4 microseconds 00:09:52.310 Relative Read Throughput: 0 00:09:52.310 Relative Read Latency: 0 00:09:52.310 Relative Write Throughput: 0 00:09:52.310 Relative Write Latency: 0 00:09:52.310 Idle Power: Not Reported 00:09:52.310 Active Power: Not Reported 00:09:52.310 Non-Operational Permissive Mode: Not Supported 00:09:52.310 00:09:52.310 Health Information 00:09:52.310 ================== 00:09:52.311 Critical Warnings: 00:09:52.311 Available Spare Space: OK 00:09:52.311 Temperature: OK 00:09:52.311 Device Reliability: OK 00:09:52.311 Read Only: No 00:09:52.311 Volatile Memory Backup: OK 00:09:52.311 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.311 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.311 Available Spare: 0% 00:09:52.311 Available Spare Threshold: 0% 00:09:52.311 Life Percentage Used: 0% 00:09:52.311 Data Units Read: 2201 00:09:52.311 Data Units Written: 1882 00:09:52.311 Host Read Commands: 103522 00:09:52.311 Host Write Commands: 99292 00:09:52.311 Controller Busy Time: 0 minutes 00:09:52.311 Power Cycles: 0 00:09:52.311 Power On Hours: 0 hours 00:09:52.311 Unsafe Shutdowns: 0 00:09:52.311 Unrecoverable Media Errors: 0 00:09:52.311 Lifetime Error Log Entries: 0 00:09:52.311 Warning Temperature Time: 0 minutes 00:09:52.311 Critical Temperature Time: 0 minutes 00:09:52.311 00:09:52.311 Number of Queues 00:09:52.311 ================ 00:09:52.311 Number of I/O Submission Queues: 64 00:09:52.311 Number of I/O Completion Queues: 64 00:09:52.311 00:09:52.311 ZNS Specific Controller Data 00:09:52.311 ============================ 00:09:52.311 Zone Append Size Limit: 0 00:09:52.311 00:09:52.311 00:09:52.311 Active Namespaces 00:09:52.311 ================= 00:09:52.311 Namespace ID:1 00:09:52.311 Error Recovery Timeout: Unlimited 00:09:52.311 Command Set Identifier: NVM (00h) 00:09:52.311 Deallocate: Supported 00:09:52.311 Deallocated/Unwritten Error: Supported 00:09:52.311 Deallocated Read Value: All 0x00 00:09:52.311 Deallocate in Write Zeroes: Not Supported 00:09:52.311 Deallocated Guard Field: 0xFFFF 00:09:52.311 Flush: Supported 00:09:52.311 Reservation: Not Supported 00:09:52.311 Namespace Sharing Capabilities: Private 00:09:52.311 Size (in LBAs): 1048576 (4GiB) 00:09:52.311 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.311 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.311 Thin Provisioning: Not Supported 00:09:52.311 Per-NS Atomic Units: No 00:09:52.311 Maximum Single Source Range Length: 128 00:09:52.311 Maximum Copy Length: 128 00:09:52.311 Maximum Source Range Count: 128 00:09:52.311 NGUID/EUI64 Never Reused: No 00:09:52.311 Namespace Write Protected: No 00:09:52.311 Number of LBA Formats: 8 00:09:52.311 Current LBA Format: LBA Format #04 00:09:52.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.311 00:09:52.311 NVM 
Specific Namespace Data 00:09:52.311 =========================== 00:09:52.311 Logical Block Storage Tag Mask: 0 00:09:52.311 Protection Information Capabilities: 00:09:52.311 16b Guard Protection Information Storage Tag Support: No 00:09:52.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.311 Storage Tag Check Read Support: No 00:09:52.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Namespace ID:2 00:09:52.311 Error Recovery Timeout: Unlimited 00:09:52.311 Command Set Identifier: NVM (00h) 00:09:52.311 Deallocate: Supported 00:09:52.311 Deallocated/Unwritten Error: Supported 00:09:52.311 Deallocated Read Value: All 0x00 00:09:52.311 Deallocate in Write Zeroes: Not Supported 00:09:52.311 Deallocated Guard Field: 0xFFFF 00:09:52.311 Flush: Supported 00:09:52.311 Reservation: Not Supported 00:09:52.311 Namespace Sharing Capabilities: Private 00:09:52.311 Size (in LBAs): 1048576 (4GiB) 00:09:52.311 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.311 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.311 Thin Provisioning: Not Supported 00:09:52.311 Per-NS Atomic Units: No 00:09:52.311 Maximum Single Source Range Length: 128 00:09:52.311 Maximum Copy Length: 128 00:09:52.311 Maximum Source Range Count: 128 00:09:52.311 NGUID/EUI64 Never Reused: No 00:09:52.311 Namespace Write Protected: No 00:09:52.311 Number of LBA Formats: 8 00:09:52.311 Current LBA Format: LBA Format #04 00:09:52.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.311 00:09:52.311 NVM Specific Namespace Data 00:09:52.311 =========================== 00:09:52.311 Logical Block Storage Tag Mask: 0 00:09:52.311 Protection Information Capabilities: 00:09:52.311 16b Guard Protection Information Storage Tag Support: No 00:09:52.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.311 Storage Tag Check Read Support: No 00:09:52.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Namespace ID:3 00:09:52.311 Error Recovery Timeout: Unlimited 00:09:52.311 Command Set Identifier: NVM (00h) 00:09:52.311 Deallocate: Supported 00:09:52.311 Deallocated/Unwritten Error: Supported 00:09:52.311 Deallocated Read Value: All 0x00 00:09:52.311 Deallocate in Write Zeroes: Not Supported 00:09:52.311 Deallocated Guard Field: 0xFFFF 00:09:52.311 Flush: Supported 00:09:52.311 Reservation: Not Supported 00:09:52.311 Namespace Sharing Capabilities: Private 00:09:52.311 Size (in LBAs): 1048576 (4GiB) 00:09:52.311 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.311 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.311 Thin Provisioning: Not Supported 00:09:52.311 Per-NS Atomic Units: No 00:09:52.311 Maximum Single Source Range Length: 128 00:09:52.311 Maximum Copy Length: 128 00:09:52.311 Maximum Source Range Count: 128 00:09:52.311 NGUID/EUI64 Never Reused: No 00:09:52.311 Namespace Write Protected: No 00:09:52.311 Number of LBA Formats: 8 00:09:52.311 Current LBA Format: LBA Format #04 00:09:52.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.311 00:09:52.311 NVM Specific Namespace Data 00:09:52.311 =========================== 00:09:52.311 Logical Block Storage Tag Mask: 0 00:09:52.311 Protection Information Capabilities: 00:09:52.311 16b Guard Protection Information Storage Tag Support: No 00:09:52.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.311 Storage Tag Check Read Support: No 00:09:52.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.311 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.311 17:16:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:52.601 ===================================================== 00:09:52.601 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.601 
===================================================== 00:09:52.601 Controller Capabilities/Features 00:09:52.601 ================================ 00:09:52.601 Vendor ID: 1b36 00:09:52.601 Subsystem Vendor ID: 1af4 00:09:52.601 Serial Number: 12340 00:09:52.601 Model Number: QEMU NVMe Ctrl 00:09:52.601 Firmware Version: 8.0.0 00:09:52.601 Recommended Arb Burst: 6 00:09:52.601 IEEE OUI Identifier: 00 54 52 00:09:52.601 Multi-path I/O 00:09:52.601 May have multiple subsystem ports: No 00:09:52.601 May have multiple controllers: No 00:09:52.601 Associated with SR-IOV VF: No 00:09:52.601 Max Data Transfer Size: 524288 00:09:52.601 Max Number of Namespaces: 256 00:09:52.601 Max Number of I/O Queues: 64 00:09:52.601 NVMe Specification Version (VS): 1.4 00:09:52.601 NVMe Specification Version (Identify): 1.4 00:09:52.601 Maximum Queue Entries: 2048 00:09:52.601 Contiguous Queues Required: Yes 00:09:52.601 Arbitration Mechanisms Supported 00:09:52.601 Weighted Round Robin: Not Supported 00:09:52.601 Vendor Specific: Not Supported 00:09:52.601 Reset Timeout: 7500 ms 00:09:52.601 Doorbell Stride: 4 bytes 00:09:52.601 NVM Subsystem Reset: Not Supported 00:09:52.601 Command Sets Supported 00:09:52.601 NVM Command Set: Supported 00:09:52.601 Boot Partition: Not Supported 00:09:52.601 Memory Page Size Minimum: 4096 bytes 00:09:52.601 Memory Page Size Maximum: 65536 bytes 00:09:52.601 Persistent Memory Region: Not Supported 00:09:52.601 Optional Asynchronous Events Supported 00:09:52.601 Namespace Attribute Notices: Supported 00:09:52.601 Firmware Activation Notices: Not Supported 00:09:52.601 ANA Change Notices: Not Supported 00:09:52.601 PLE Aggregate Log Change Notices: Not Supported 00:09:52.601 LBA Status Info Alert Notices: Not Supported 00:09:52.601 EGE Aggregate Log Change Notices: Not Supported 00:09:52.601 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.601 Zone Descriptor Change Notices: Not Supported 00:09:52.601 Discovery Log Change Notices: Not Supported 00:09:52.601 Controller Attributes 00:09:52.601 128-bit Host Identifier: Not Supported 00:09:52.601 Non-Operational Permissive Mode: Not Supported 00:09:52.601 NVM Sets: Not Supported 00:09:52.601 Read Recovery Levels: Not Supported 00:09:52.601 Endurance Groups: Not Supported 00:09:52.601 Predictable Latency Mode: Not Supported 00:09:52.601 Traffic Based Keep ALive: Not Supported 00:09:52.601 Namespace Granularity: Not Supported 00:09:52.601 SQ Associations: Not Supported 00:09:52.601 UUID List: Not Supported 00:09:52.601 Multi-Domain Subsystem: Not Supported 00:09:52.601 Fixed Capacity Management: Not Supported 00:09:52.601 Variable Capacity Management: Not Supported 00:09:52.601 Delete Endurance Group: Not Supported 00:09:52.601 Delete NVM Set: Not Supported 00:09:52.601 Extended LBA Formats Supported: Supported 00:09:52.601 Flexible Data Placement Supported: Not Supported 00:09:52.601 00:09:52.601 Controller Memory Buffer Support 00:09:52.601 ================================ 00:09:52.601 Supported: No 00:09:52.601 00:09:52.601 Persistent Memory Region Support 00:09:52.601 ================================ 00:09:52.601 Supported: No 00:09:52.601 00:09:52.601 Admin Command Set Attributes 00:09:52.601 ============================ 00:09:52.601 Security Send/Receive: Not Supported 00:09:52.601 Format NVM: Supported 00:09:52.601 Firmware Activate/Download: Not Supported 00:09:52.601 Namespace Management: Supported 00:09:52.601 Device Self-Test: Not Supported 00:09:52.601 Directives: Supported 00:09:52.601 NVMe-MI: Not Supported 
00:09:52.601 Virtualization Management: Not Supported 00:09:52.601 Doorbell Buffer Config: Supported 00:09:52.601 Get LBA Status Capability: Not Supported 00:09:52.601 Command & Feature Lockdown Capability: Not Supported 00:09:52.601 Abort Command Limit: 4 00:09:52.601 Async Event Request Limit: 4 00:09:52.601 Number of Firmware Slots: N/A 00:09:52.601 Firmware Slot 1 Read-Only: N/A 00:09:52.601 Firmware Activation Without Reset: N/A 00:09:52.601 Multiple Update Detection Support: N/A 00:09:52.601 Firmware Update Granularity: No Information Provided 00:09:52.601 Per-Namespace SMART Log: Yes 00:09:52.601 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.601 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:52.601 Command Effects Log Page: Supported 00:09:52.601 Get Log Page Extended Data: Supported 00:09:52.601 Telemetry Log Pages: Not Supported 00:09:52.601 Persistent Event Log Pages: Not Supported 00:09:52.601 Supported Log Pages Log Page: May Support 00:09:52.602 Commands Supported & Effects Log Page: Not Supported 00:09:52.602 Feature Identifiers & Effects Log Page:May Support 00:09:52.602 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.602 Data Area 4 for Telemetry Log: Not Supported 00:09:52.602 Error Log Page Entries Supported: 1 00:09:52.602 Keep Alive: Not Supported 00:09:52.602 00:09:52.602 NVM Command Set Attributes 00:09:52.602 ========================== 00:09:52.602 Submission Queue Entry Size 00:09:52.602 Max: 64 00:09:52.602 Min: 64 00:09:52.602 Completion Queue Entry Size 00:09:52.602 Max: 16 00:09:52.602 Min: 16 00:09:52.602 Number of Namespaces: 256 00:09:52.602 Compare Command: Supported 00:09:52.602 Write Uncorrectable Command: Not Supported 00:09:52.602 Dataset Management Command: Supported 00:09:52.602 Write Zeroes Command: Supported 00:09:52.602 Set Features Save Field: Supported 00:09:52.602 Reservations: Not Supported 00:09:52.602 Timestamp: Supported 00:09:52.603 Copy: Supported 00:09:52.603 Volatile Write Cache: Present 00:09:52.603 Atomic Write Unit (Normal): 1 00:09:52.603 Atomic Write Unit (PFail): 1 00:09:52.603 Atomic Compare & Write Unit: 1 00:09:52.603 Fused Compare & Write: Not Supported 00:09:52.603 Scatter-Gather List 00:09:52.603 SGL Command Set: Supported 00:09:52.603 SGL Keyed: Not Supported 00:09:52.603 SGL Bit Bucket Descriptor: Not Supported 00:09:52.603 SGL Metadata Pointer: Not Supported 00:09:52.603 Oversized SGL: Not Supported 00:09:52.603 SGL Metadata Address: Not Supported 00:09:52.603 SGL Offset: Not Supported 00:09:52.603 Transport SGL Data Block: Not Supported 00:09:52.603 Replay Protected Memory Block: Not Supported 00:09:52.603 00:09:52.603 Firmware Slot Information 00:09:52.603 ========================= 00:09:52.603 Active slot: 1 00:09:52.603 Slot 1 Firmware Revision: 1.0 00:09:52.603 00:09:52.603 00:09:52.603 Commands Supported and Effects 00:09:52.603 ============================== 00:09:52.603 Admin Commands 00:09:52.603 -------------- 00:09:52.603 Delete I/O Submission Queue (00h): Supported 00:09:52.603 Create I/O Submission Queue (01h): Supported 00:09:52.603 Get Log Page (02h): Supported 00:09:52.603 Delete I/O Completion Queue (04h): Supported 00:09:52.603 Create I/O Completion Queue (05h): Supported 00:09:52.603 Identify (06h): Supported 00:09:52.603 Abort (08h): Supported 00:09:52.603 Set Features (09h): Supported 00:09:52.603 Get Features (0Ah): Supported 00:09:52.603 Asynchronous Event Request (0Ch): Supported 00:09:52.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.603 Directive 
Send (19h): Supported 00:09:52.604 Directive Receive (1Ah): Supported 00:09:52.604 Virtualization Management (1Ch): Supported 00:09:52.604 Doorbell Buffer Config (7Ch): Supported 00:09:52.604 Format NVM (80h): Supported LBA-Change 00:09:52.604 I/O Commands 00:09:52.604 ------------ 00:09:52.604 Flush (00h): Supported LBA-Change 00:09:52.604 Write (01h): Supported LBA-Change 00:09:52.604 Read (02h): Supported 00:09:52.604 Compare (05h): Supported 00:09:52.604 Write Zeroes (08h): Supported LBA-Change 00:09:52.604 Dataset Management (09h): Supported LBA-Change 00:09:52.604 Unknown (0Ch): Supported 00:09:52.604 Unknown (12h): Supported 00:09:52.604 Copy (19h): Supported LBA-Change 00:09:52.604 Unknown (1Dh): Supported LBA-Change 00:09:52.604 00:09:52.604 Error Log 00:09:52.604 ========= 00:09:52.604 00:09:52.604 Arbitration 00:09:52.604 =========== 00:09:52.604 Arbitration Burst: no limit 00:09:52.604 00:09:52.604 Power Management 00:09:52.604 ================ 00:09:52.604 Number of Power States: 1 00:09:52.604 Current Power State: Power State #0 00:09:52.604 Power State #0: 00:09:52.604 Max Power: 25.00 W 00:09:52.604 Non-Operational State: Operational 00:09:52.604 Entry Latency: 16 microseconds 00:09:52.604 Exit Latency: 4 microseconds 00:09:52.604 Relative Read Throughput: 0 00:09:52.604 Relative Read Latency: 0 00:09:52.604 Relative Write Throughput: 0 00:09:52.604 Relative Write Latency: 0 00:09:52.604 Idle Power: Not Reported 00:09:52.604 Active Power: Not Reported 00:09:52.604 Non-Operational Permissive Mode: Not Supported 00:09:52.604 00:09:52.604 Health Information 00:09:52.604 ================== 00:09:52.604 Critical Warnings: 00:09:52.604 Available Spare Space: OK 00:09:52.604 Temperature: OK 00:09:52.604 Device Reliability: OK 00:09:52.604 Read Only: No 00:09:52.604 Volatile Memory Backup: OK 00:09:52.604 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.604 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.604 Available Spare: 0% 00:09:52.604 Available Spare Threshold: 0% 00:09:52.604 Life Percentage Used: 0% 00:09:52.604 Data Units Read: 1041 00:09:52.604 Data Units Written: 874 00:09:52.604 Host Read Commands: 49548 00:09:52.604 Host Write Commands: 48056 00:09:52.604 Controller Busy Time: 0 minutes 00:09:52.604 Power Cycles: 0 00:09:52.604 Power On Hours: 0 hours 00:09:52.604 Unsafe Shutdowns: 0 00:09:52.604 Unrecoverable Media Errors: 0 00:09:52.604 Lifetime Error Log Entries: 0 00:09:52.604 Warning Temperature Time: 0 minutes 00:09:52.604 Critical Temperature Time: 0 minutes 00:09:52.604 00:09:52.604 Number of Queues 00:09:52.604 ================ 00:09:52.604 Number of I/O Submission Queues: 64 00:09:52.604 Number of I/O Completion Queues: 64 00:09:52.604 00:09:52.604 ZNS Specific Controller Data 00:09:52.604 ============================ 00:09:52.604 Zone Append Size Limit: 0 00:09:52.604 00:09:52.604 00:09:52.604 Active Namespaces 00:09:52.604 ================= 00:09:52.604 Namespace ID:1 00:09:52.604 Error Recovery Timeout: Unlimited 00:09:52.604 Command Set Identifier: NVM (00h) 00:09:52.604 Deallocate: Supported 00:09:52.604 Deallocated/Unwritten Error: Supported 00:09:52.604 Deallocated Read Value: All 0x00 00:09:52.604 Deallocate in Write Zeroes: Not Supported 00:09:52.604 Deallocated Guard Field: 0xFFFF 00:09:52.604 Flush: Supported 00:09:52.604 Reservation: Not Supported 00:09:52.604 Metadata Transferred as: Separate Metadata Buffer 00:09:52.604 Namespace Sharing Capabilities: Private 00:09:52.604 Size (in LBAs): 1548666 (5GiB) 00:09:52.604 Capacity (in 
LBAs): 1548666 (5GiB) 00:09:52.604 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.604 Thin Provisioning: Not Supported 00:09:52.604 Per-NS Atomic Units: No 00:09:52.604 Maximum Single Source Range Length: 128 00:09:52.604 Maximum Copy Length: 128 00:09:52.604 Maximum Source Range Count: 128 00:09:52.604 NGUID/EUI64 Never Reused: No 00:09:52.604 Namespace Write Protected: No 00:09:52.604 Number of LBA Formats: 8 00:09:52.604 Current LBA Format: LBA Format #07 00:09:52.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.604 00:09:52.604 NVM Specific Namespace Data 00:09:52.604 =========================== 00:09:52.604 Logical Block Storage Tag Mask: 0 00:09:52.604 Protection Information Capabilities: 00:09:52.604 16b Guard Protection Information Storage Tag Support: No 00:09:52.604 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.604 Storage Tag Check Read Support: No 00:09:52.605 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.605 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.605 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:52.925 ===================================================== 00:09:52.925 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.925 ===================================================== 00:09:52.925 Controller Capabilities/Features 00:09:52.925 ================================ 00:09:52.925 Vendor ID: 1b36 00:09:52.925 Subsystem Vendor ID: 1af4 00:09:52.925 Serial Number: 12341 00:09:52.925 Model Number: QEMU NVMe Ctrl 00:09:52.925 Firmware Version: 8.0.0 00:09:52.925 Recommended Arb Burst: 6 00:09:52.925 IEEE OUI Identifier: 00 54 52 00:09:52.925 Multi-path I/O 00:09:52.925 May have multiple subsystem ports: No 00:09:52.925 May have multiple controllers: No 00:09:52.925 Associated with SR-IOV VF: No 00:09:52.925 Max Data Transfer Size: 524288 00:09:52.925 Max Number of Namespaces: 256 00:09:52.925 Max Number of I/O Queues: 64 00:09:52.925 NVMe Specification Version (VS): 1.4 00:09:52.925 NVMe Specification Version (Identify): 1.4 00:09:52.925 Maximum Queue Entries: 2048 00:09:52.925 Contiguous Queues Required: Yes 00:09:52.925 Arbitration Mechanisms Supported 00:09:52.925 Weighted Round 
Robin: Not Supported 00:09:52.925 Vendor Specific: Not Supported 00:09:52.925 Reset Timeout: 7500 ms 00:09:52.925 Doorbell Stride: 4 bytes 00:09:52.925 NVM Subsystem Reset: Not Supported 00:09:52.925 Command Sets Supported 00:09:52.925 NVM Command Set: Supported 00:09:52.925 Boot Partition: Not Supported 00:09:52.925 Memory Page Size Minimum: 4096 bytes 00:09:52.925 Memory Page Size Maximum: 65536 bytes 00:09:52.925 Persistent Memory Region: Not Supported 00:09:52.925 Optional Asynchronous Events Supported 00:09:52.925 Namespace Attribute Notices: Supported 00:09:52.925 Firmware Activation Notices: Not Supported 00:09:52.925 ANA Change Notices: Not Supported 00:09:52.925 PLE Aggregate Log Change Notices: Not Supported 00:09:52.925 LBA Status Info Alert Notices: Not Supported 00:09:52.925 EGE Aggregate Log Change Notices: Not Supported 00:09:52.925 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.925 Zone Descriptor Change Notices: Not Supported 00:09:52.925 Discovery Log Change Notices: Not Supported 00:09:52.925 Controller Attributes 00:09:52.925 128-bit Host Identifier: Not Supported 00:09:52.925 Non-Operational Permissive Mode: Not Supported 00:09:52.925 NVM Sets: Not Supported 00:09:52.925 Read Recovery Levels: Not Supported 00:09:52.925 Endurance Groups: Not Supported 00:09:52.925 Predictable Latency Mode: Not Supported 00:09:52.925 Traffic Based Keep ALive: Not Supported 00:09:52.925 Namespace Granularity: Not Supported 00:09:52.925 SQ Associations: Not Supported 00:09:52.925 UUID List: Not Supported 00:09:52.925 Multi-Domain Subsystem: Not Supported 00:09:52.925 Fixed Capacity Management: Not Supported 00:09:52.925 Variable Capacity Management: Not Supported 00:09:52.925 Delete Endurance Group: Not Supported 00:09:52.925 Delete NVM Set: Not Supported 00:09:52.925 Extended LBA Formats Supported: Supported 00:09:52.925 Flexible Data Placement Supported: Not Supported 00:09:52.925 00:09:52.925 Controller Memory Buffer Support 00:09:52.925 ================================ 00:09:52.925 Supported: No 00:09:52.925 00:09:52.925 Persistent Memory Region Support 00:09:52.925 ================================ 00:09:52.925 Supported: No 00:09:52.925 00:09:52.925 Admin Command Set Attributes 00:09:52.925 ============================ 00:09:52.925 Security Send/Receive: Not Supported 00:09:52.925 Format NVM: Supported 00:09:52.925 Firmware Activate/Download: Not Supported 00:09:52.925 Namespace Management: Supported 00:09:52.925 Device Self-Test: Not Supported 00:09:52.925 Directives: Supported 00:09:52.925 NVMe-MI: Not Supported 00:09:52.925 Virtualization Management: Not Supported 00:09:52.925 Doorbell Buffer Config: Supported 00:09:52.925 Get LBA Status Capability: Not Supported 00:09:52.925 Command & Feature Lockdown Capability: Not Supported 00:09:52.925 Abort Command Limit: 4 00:09:52.925 Async Event Request Limit: 4 00:09:52.925 Number of Firmware Slots: N/A 00:09:52.925 Firmware Slot 1 Read-Only: N/A 00:09:52.925 Firmware Activation Without Reset: N/A 00:09:52.925 Multiple Update Detection Support: N/A 00:09:52.925 Firmware Update Granularity: No Information Provided 00:09:52.925 Per-Namespace SMART Log: Yes 00:09:52.925 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.925 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.925 Command Effects Log Page: Supported 00:09:52.925 Get Log Page Extended Data: Supported 00:09:52.925 Telemetry Log Pages: Not Supported 00:09:52.925 Persistent Event Log Pages: Not Supported 00:09:52.925 Supported Log Pages Log Page: May Support 
00:09:52.925 Commands Supported & Effects Log Page: Not Supported 00:09:52.925 Feature Identifiers & Effects Log Page:May Support 00:09:52.925 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.925 Data Area 4 for Telemetry Log: Not Supported 00:09:52.925 Error Log Page Entries Supported: 1 00:09:52.925 Keep Alive: Not Supported 00:09:52.925 00:09:52.925 NVM Command Set Attributes 00:09:52.925 ========================== 00:09:52.925 Submission Queue Entry Size 00:09:52.925 Max: 64 00:09:52.925 Min: 64 00:09:52.925 Completion Queue Entry Size 00:09:52.925 Max: 16 00:09:52.925 Min: 16 00:09:52.925 Number of Namespaces: 256 00:09:52.925 Compare Command: Supported 00:09:52.925 Write Uncorrectable Command: Not Supported 00:09:52.925 Dataset Management Command: Supported 00:09:52.925 Write Zeroes Command: Supported 00:09:52.925 Set Features Save Field: Supported 00:09:52.925 Reservations: Not Supported 00:09:52.926 Timestamp: Supported 00:09:52.926 Copy: Supported 00:09:52.926 Volatile Write Cache: Present 00:09:52.926 Atomic Write Unit (Normal): 1 00:09:52.926 Atomic Write Unit (PFail): 1 00:09:52.926 Atomic Compare & Write Unit: 1 00:09:52.926 Fused Compare & Write: Not Supported 00:09:52.926 Scatter-Gather List 00:09:52.926 SGL Command Set: Supported 00:09:52.926 SGL Keyed: Not Supported 00:09:52.926 SGL Bit Bucket Descriptor: Not Supported 00:09:52.926 SGL Metadata Pointer: Not Supported 00:09:52.926 Oversized SGL: Not Supported 00:09:52.926 SGL Metadata Address: Not Supported 00:09:52.926 SGL Offset: Not Supported 00:09:52.926 Transport SGL Data Block: Not Supported 00:09:52.926 Replay Protected Memory Block: Not Supported 00:09:52.926 00:09:52.926 Firmware Slot Information 00:09:52.926 ========================= 00:09:52.926 Active slot: 1 00:09:52.926 Slot 1 Firmware Revision: 1.0 00:09:52.926 00:09:52.926 00:09:52.926 Commands Supported and Effects 00:09:52.926 ============================== 00:09:52.926 Admin Commands 00:09:52.926 -------------- 00:09:52.926 Delete I/O Submission Queue (00h): Supported 00:09:52.926 Create I/O Submission Queue (01h): Supported 00:09:52.926 Get Log Page (02h): Supported 00:09:52.926 Delete I/O Completion Queue (04h): Supported 00:09:52.926 Create I/O Completion Queue (05h): Supported 00:09:52.926 Identify (06h): Supported 00:09:52.926 Abort (08h): Supported 00:09:52.926 Set Features (09h): Supported 00:09:52.926 Get Features (0Ah): Supported 00:09:52.926 Asynchronous Event Request (0Ch): Supported 00:09:52.926 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.926 Directive Send (19h): Supported 00:09:52.926 Directive Receive (1Ah): Supported 00:09:52.926 Virtualization Management (1Ch): Supported 00:09:52.926 Doorbell Buffer Config (7Ch): Supported 00:09:52.926 Format NVM (80h): Supported LBA-Change 00:09:52.926 I/O Commands 00:09:52.926 ------------ 00:09:52.926 Flush (00h): Supported LBA-Change 00:09:52.926 Write (01h): Supported LBA-Change 00:09:52.926 Read (02h): Supported 00:09:52.926 Compare (05h): Supported 00:09:52.926 Write Zeroes (08h): Supported LBA-Change 00:09:52.926 Dataset Management (09h): Supported LBA-Change 00:09:52.926 Unknown (0Ch): Supported 00:09:52.926 Unknown (12h): Supported 00:09:52.926 Copy (19h): Supported LBA-Change 00:09:52.926 Unknown (1Dh): Supported LBA-Change 00:09:52.926 00:09:52.926 Error Log 00:09:52.926 ========= 00:09:52.926 00:09:52.926 Arbitration 00:09:52.926 =========== 00:09:52.926 Arbitration Burst: no limit 00:09:52.926 00:09:52.926 Power Management 00:09:52.926 ================ 
00:09:52.926 Number of Power States: 1 00:09:52.926 Current Power State: Power State #0 00:09:52.926 Power State #0: 00:09:52.926 Max Power: 25.00 W 00:09:52.926 Non-Operational State: Operational 00:09:52.926 Entry Latency: 16 microseconds 00:09:52.926 Exit Latency: 4 microseconds 00:09:52.926 Relative Read Throughput: 0 00:09:52.926 Relative Read Latency: 0 00:09:52.926 Relative Write Throughput: 0 00:09:52.926 Relative Write Latency: 0 00:09:52.926 Idle Power: Not Reported 00:09:52.926 Active Power: Not Reported 00:09:52.926 Non-Operational Permissive Mode: Not Supported 00:09:52.926 00:09:52.926 Health Information 00:09:52.926 ================== 00:09:52.926 Critical Warnings: 00:09:52.926 Available Spare Space: OK 00:09:52.926 Temperature: OK 00:09:52.926 Device Reliability: OK 00:09:52.926 Read Only: No 00:09:52.926 Volatile Memory Backup: OK 00:09:52.926 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.926 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.926 Available Spare: 0% 00:09:52.926 Available Spare Threshold: 0% 00:09:52.926 Life Percentage Used: 0% 00:09:52.926 Data Units Read: 754 00:09:52.926 Data Units Written: 601 00:09:52.926 Host Read Commands: 35205 00:09:52.926 Host Write Commands: 32886 00:09:52.926 Controller Busy Time: 0 minutes 00:09:52.926 Power Cycles: 0 00:09:52.926 Power On Hours: 0 hours 00:09:52.926 Unsafe Shutdowns: 0 00:09:52.926 Unrecoverable Media Errors: 0 00:09:52.926 Lifetime Error Log Entries: 0 00:09:52.926 Warning Temperature Time: 0 minutes 00:09:52.926 Critical Temperature Time: 0 minutes 00:09:52.926 00:09:52.926 Number of Queues 00:09:52.926 ================ 00:09:52.926 Number of I/O Submission Queues: 64 00:09:52.926 Number of I/O Completion Queues: 64 00:09:52.926 00:09:52.926 ZNS Specific Controller Data 00:09:52.926 ============================ 00:09:52.926 Zone Append Size Limit: 0 00:09:52.926 00:09:52.926 00:09:52.926 Active Namespaces 00:09:52.926 ================= 00:09:52.926 Namespace ID:1 00:09:52.926 Error Recovery Timeout: Unlimited 00:09:52.926 Command Set Identifier: NVM (00h) 00:09:52.926 Deallocate: Supported 00:09:52.926 Deallocated/Unwritten Error: Supported 00:09:52.926 Deallocated Read Value: All 0x00 00:09:52.926 Deallocate in Write Zeroes: Not Supported 00:09:52.926 Deallocated Guard Field: 0xFFFF 00:09:52.926 Flush: Supported 00:09:52.926 Reservation: Not Supported 00:09:52.926 Namespace Sharing Capabilities: Private 00:09:52.926 Size (in LBAs): 1310720 (5GiB) 00:09:52.926 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.926 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.926 Thin Provisioning: Not Supported 00:09:52.926 Per-NS Atomic Units: No 00:09:52.926 Maximum Single Source Range Length: 128 00:09:52.926 Maximum Copy Length: 128 00:09:52.926 Maximum Source Range Count: 128 00:09:52.926 NGUID/EUI64 Never Reused: No 00:09:52.926 Namespace Write Protected: No 00:09:52.926 Number of LBA Formats: 8 00:09:52.926 Current LBA Format: LBA Format #04 00:09:52.926 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.926 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.926 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.926 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.926 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.926 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.926 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.926 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.926 00:09:52.926 NVM Specific Namespace Data 00:09:52.926 
=========================== 00:09:52.926 Logical Block Storage Tag Mask: 0 00:09:52.926 Protection Information Capabilities: 00:09:52.926 16b Guard Protection Information Storage Tag Support: No 00:09:52.926 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:52.926 Storage Tag Check Read Support: No 00:09:52.926 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:52.926 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.926 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:53.184 ===================================================== 00:09:53.184 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.184 ===================================================== 00:09:53.184 Controller Capabilities/Features 00:09:53.184 ================================ 00:09:53.184 Vendor ID: 1b36 00:09:53.184 Subsystem Vendor ID: 1af4 00:09:53.184 Serial Number: 12342 00:09:53.184 Model Number: QEMU NVMe Ctrl 00:09:53.184 Firmware Version: 8.0.0 00:09:53.184 Recommended Arb Burst: 6 00:09:53.184 IEEE OUI Identifier: 00 54 52 00:09:53.184 Multi-path I/O 00:09:53.184 May have multiple subsystem ports: No 00:09:53.184 May have multiple controllers: No 00:09:53.184 Associated with SR-IOV VF: No 00:09:53.184 Max Data Transfer Size: 524288 00:09:53.184 Max Number of Namespaces: 256 00:09:53.184 Max Number of I/O Queues: 64 00:09:53.184 NVMe Specification Version (VS): 1.4 00:09:53.184 NVMe Specification Version (Identify): 1.4 00:09:53.184 Maximum Queue Entries: 2048 00:09:53.184 Contiguous Queues Required: Yes 00:09:53.184 Arbitration Mechanisms Supported 00:09:53.184 Weighted Round Robin: Not Supported 00:09:53.184 Vendor Specific: Not Supported 00:09:53.184 Reset Timeout: 7500 ms 00:09:53.184 Doorbell Stride: 4 bytes 00:09:53.184 NVM Subsystem Reset: Not Supported 00:09:53.184 Command Sets Supported 00:09:53.184 NVM Command Set: Supported 00:09:53.184 Boot Partition: Not Supported 00:09:53.184 Memory Page Size Minimum: 4096 bytes 00:09:53.184 Memory Page Size Maximum: 65536 bytes 00:09:53.184 Persistent Memory Region: Not Supported 00:09:53.184 Optional Asynchronous Events Supported 00:09:53.184 Namespace Attribute Notices: Supported 00:09:53.184 Firmware Activation Notices: Not Supported 00:09:53.184 ANA Change Notices: Not Supported 00:09:53.184 PLE Aggregate Log Change Notices: Not Supported 00:09:53.184 LBA Status Info Alert Notices: Not Supported 00:09:53.184 EGE Aggregate Log Change Notices: Not Supported 00:09:53.184 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.184 Zone Descriptor Change Notices: Not Supported 00:09:53.184 Discovery Log Change Notices: Not Supported 
00:09:53.184 Controller Attributes 00:09:53.184 128-bit Host Identifier: Not Supported 00:09:53.184 Non-Operational Permissive Mode: Not Supported 00:09:53.184 NVM Sets: Not Supported 00:09:53.184 Read Recovery Levels: Not Supported 00:09:53.184 Endurance Groups: Not Supported 00:09:53.184 Predictable Latency Mode: Not Supported 00:09:53.184 Traffic Based Keep ALive: Not Supported 00:09:53.184 Namespace Granularity: Not Supported 00:09:53.184 SQ Associations: Not Supported 00:09:53.184 UUID List: Not Supported 00:09:53.184 Multi-Domain Subsystem: Not Supported 00:09:53.184 Fixed Capacity Management: Not Supported 00:09:53.184 Variable Capacity Management: Not Supported 00:09:53.184 Delete Endurance Group: Not Supported 00:09:53.184 Delete NVM Set: Not Supported 00:09:53.184 Extended LBA Formats Supported: Supported 00:09:53.184 Flexible Data Placement Supported: Not Supported 00:09:53.184 00:09:53.184 Controller Memory Buffer Support 00:09:53.184 ================================ 00:09:53.184 Supported: No 00:09:53.184 00:09:53.184 Persistent Memory Region Support 00:09:53.184 ================================ 00:09:53.184 Supported: No 00:09:53.184 00:09:53.184 Admin Command Set Attributes 00:09:53.184 ============================ 00:09:53.184 Security Send/Receive: Not Supported 00:09:53.184 Format NVM: Supported 00:09:53.184 Firmware Activate/Download: Not Supported 00:09:53.184 Namespace Management: Supported 00:09:53.184 Device Self-Test: Not Supported 00:09:53.184 Directives: Supported 00:09:53.184 NVMe-MI: Not Supported 00:09:53.184 Virtualization Management: Not Supported 00:09:53.184 Doorbell Buffer Config: Supported 00:09:53.184 Get LBA Status Capability: Not Supported 00:09:53.184 Command & Feature Lockdown Capability: Not Supported 00:09:53.184 Abort Command Limit: 4 00:09:53.184 Async Event Request Limit: 4 00:09:53.184 Number of Firmware Slots: N/A 00:09:53.184 Firmware Slot 1 Read-Only: N/A 00:09:53.184 Firmware Activation Without Reset: N/A 00:09:53.184 Multiple Update Detection Support: N/A 00:09:53.184 Firmware Update Granularity: No Information Provided 00:09:53.184 Per-Namespace SMART Log: Yes 00:09:53.184 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.184 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:53.184 Command Effects Log Page: Supported 00:09:53.184 Get Log Page Extended Data: Supported 00:09:53.184 Telemetry Log Pages: Not Supported 00:09:53.184 Persistent Event Log Pages: Not Supported 00:09:53.184 Supported Log Pages Log Page: May Support 00:09:53.184 Commands Supported & Effects Log Page: Not Supported 00:09:53.184 Feature Identifiers & Effects Log Page:May Support 00:09:53.184 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.184 Data Area 4 for Telemetry Log: Not Supported 00:09:53.184 Error Log Page Entries Supported: 1 00:09:53.184 Keep Alive: Not Supported 00:09:53.184 00:09:53.184 NVM Command Set Attributes 00:09:53.184 ========================== 00:09:53.184 Submission Queue Entry Size 00:09:53.184 Max: 64 00:09:53.184 Min: 64 00:09:53.184 Completion Queue Entry Size 00:09:53.184 Max: 16 00:09:53.184 Min: 16 00:09:53.184 Number of Namespaces: 256 00:09:53.184 Compare Command: Supported 00:09:53.184 Write Uncorrectable Command: Not Supported 00:09:53.184 Dataset Management Command: Supported 00:09:53.184 Write Zeroes Command: Supported 00:09:53.184 Set Features Save Field: Supported 00:09:53.185 Reservations: Not Supported 00:09:53.185 Timestamp: Supported 00:09:53.185 Copy: Supported 00:09:53.185 Volatile Write Cache: Present 
00:09:53.185 Atomic Write Unit (Normal): 1 00:09:53.185 Atomic Write Unit (PFail): 1 00:09:53.185 Atomic Compare & Write Unit: 1 00:09:53.185 Fused Compare & Write: Not Supported 00:09:53.185 Scatter-Gather List 00:09:53.185 SGL Command Set: Supported 00:09:53.185 SGL Keyed: Not Supported 00:09:53.185 SGL Bit Bucket Descriptor: Not Supported 00:09:53.185 SGL Metadata Pointer: Not Supported 00:09:53.185 Oversized SGL: Not Supported 00:09:53.185 SGL Metadata Address: Not Supported 00:09:53.185 SGL Offset: Not Supported 00:09:53.185 Transport SGL Data Block: Not Supported 00:09:53.185 Replay Protected Memory Block: Not Supported 00:09:53.185 00:09:53.185 Firmware Slot Information 00:09:53.185 ========================= 00:09:53.185 Active slot: 1 00:09:53.185 Slot 1 Firmware Revision: 1.0 00:09:53.185 00:09:53.185 00:09:53.185 Commands Supported and Effects 00:09:53.185 ============================== 00:09:53.185 Admin Commands 00:09:53.185 -------------- 00:09:53.185 Delete I/O Submission Queue (00h): Supported 00:09:53.185 Create I/O Submission Queue (01h): Supported 00:09:53.185 Get Log Page (02h): Supported 00:09:53.185 Delete I/O Completion Queue (04h): Supported 00:09:53.185 Create I/O Completion Queue (05h): Supported 00:09:53.185 Identify (06h): Supported 00:09:53.185 Abort (08h): Supported 00:09:53.185 Set Features (09h): Supported 00:09:53.185 Get Features (0Ah): Supported 00:09:53.185 Asynchronous Event Request (0Ch): Supported 00:09:53.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.185 Directive Send (19h): Supported 00:09:53.185 Directive Receive (1Ah): Supported 00:09:53.185 Virtualization Management (1Ch): Supported 00:09:53.185 Doorbell Buffer Config (7Ch): Supported 00:09:53.185 Format NVM (80h): Supported LBA-Change 00:09:53.185 I/O Commands 00:09:53.185 ------------ 00:09:53.185 Flush (00h): Supported LBA-Change 00:09:53.185 Write (01h): Supported LBA-Change 00:09:53.185 Read (02h): Supported 00:09:53.185 Compare (05h): Supported 00:09:53.185 Write Zeroes (08h): Supported LBA-Change 00:09:53.185 Dataset Management (09h): Supported LBA-Change 00:09:53.185 Unknown (0Ch): Supported 00:09:53.185 Unknown (12h): Supported 00:09:53.185 Copy (19h): Supported LBA-Change 00:09:53.185 Unknown (1Dh): Supported LBA-Change 00:09:53.185 00:09:53.185 Error Log 00:09:53.185 ========= 00:09:53.185 00:09:53.185 Arbitration 00:09:53.185 =========== 00:09:53.185 Arbitration Burst: no limit 00:09:53.185 00:09:53.185 Power Management 00:09:53.185 ================ 00:09:53.185 Number of Power States: 1 00:09:53.185 Current Power State: Power State #0 00:09:53.185 Power State #0: 00:09:53.185 Max Power: 25.00 W 00:09:53.185 Non-Operational State: Operational 00:09:53.185 Entry Latency: 16 microseconds 00:09:53.185 Exit Latency: 4 microseconds 00:09:53.185 Relative Read Throughput: 0 00:09:53.185 Relative Read Latency: 0 00:09:53.185 Relative Write Throughput: 0 00:09:53.185 Relative Write Latency: 0 00:09:53.185 Idle Power: Not Reported 00:09:53.185 Active Power: Not Reported 00:09:53.185 Non-Operational Permissive Mode: Not Supported 00:09:53.185 00:09:53.185 Health Information 00:09:53.185 ================== 00:09:53.185 Critical Warnings: 00:09:53.185 Available Spare Space: OK 00:09:53.185 Temperature: OK 00:09:53.185 Device Reliability: OK 00:09:53.185 Read Only: No 00:09:53.185 Volatile Memory Backup: OK 00:09:53.185 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.185 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.185 Available Spare: 0% 00:09:53.185 
Available Spare Threshold: 0% 00:09:53.185 Life Percentage Used: 0% 00:09:53.185 Data Units Read: 2201 00:09:53.185 Data Units Written: 1882 00:09:53.185 Host Read Commands: 103522 00:09:53.185 Host Write Commands: 99292 00:09:53.185 Controller Busy Time: 0 minutes 00:09:53.185 Power Cycles: 0 00:09:53.185 Power On Hours: 0 hours 00:09:53.185 Unsafe Shutdowns: 0 00:09:53.185 Unrecoverable Media Errors: 0 00:09:53.185 Lifetime Error Log Entries: 0 00:09:53.185 Warning Temperature Time: 0 minutes 00:09:53.185 Critical Temperature Time: 0 minutes 00:09:53.185 00:09:53.185 Number of Queues 00:09:53.185 ================ 00:09:53.185 Number of I/O Submission Queues: 64 00:09:53.185 Number of I/O Completion Queues: 64 00:09:53.185 00:09:53.185 ZNS Specific Controller Data 00:09:53.185 ============================ 00:09:53.185 Zone Append Size Limit: 0 00:09:53.185 00:09:53.185 00:09:53.185 Active Namespaces 00:09:53.185 ================= 00:09:53.185 Namespace ID:1 00:09:53.185 Error Recovery Timeout: Unlimited 00:09:53.185 Command Set Identifier: NVM (00h) 00:09:53.185 Deallocate: Supported 00:09:53.185 Deallocated/Unwritten Error: Supported 00:09:53.185 Deallocated Read Value: All 0x00 00:09:53.185 Deallocate in Write Zeroes: Not Supported 00:09:53.185 Deallocated Guard Field: 0xFFFF 00:09:53.185 Flush: Supported 00:09:53.185 Reservation: Not Supported 00:09:53.185 Namespace Sharing Capabilities: Private 00:09:53.185 Size (in LBAs): 1048576 (4GiB) 00:09:53.185 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.185 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.185 Thin Provisioning: Not Supported 00:09:53.185 Per-NS Atomic Units: No 00:09:53.185 Maximum Single Source Range Length: 128 00:09:53.185 Maximum Copy Length: 128 00:09:53.185 Maximum Source Range Count: 128 00:09:53.185 NGUID/EUI64 Never Reused: No 00:09:53.185 Namespace Write Protected: No 00:09:53.185 Number of LBA Formats: 8 00:09:53.185 Current LBA Format: LBA Format #04 00:09:53.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.185 00:09:53.185 NVM Specific Namespace Data 00:09:53.185 =========================== 00:09:53.185 Logical Block Storage Tag Mask: 0 00:09:53.185 Protection Information Capabilities: 00:09:53.185 16b Guard Protection Information Storage Tag Support: No 00:09:53.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.185 Storage Tag Check Read Support: No 00:09:53.185 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:09:53.185 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.185 Namespace ID:2 00:09:53.185 Error Recovery Timeout: Unlimited 00:09:53.185 Command Set Identifier: NVM (00h) 00:09:53.185 Deallocate: Supported 00:09:53.185 Deallocated/Unwritten Error: Supported 00:09:53.185 Deallocated Read Value: All 0x00 00:09:53.185 Deallocate in Write Zeroes: Not Supported 00:09:53.185 Deallocated Guard Field: 0xFFFF 00:09:53.185 Flush: Supported 00:09:53.185 Reservation: Not Supported 00:09:53.185 Namespace Sharing Capabilities: Private 00:09:53.185 Size (in LBAs): 1048576 (4GiB) 00:09:53.185 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.185 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.185 Thin Provisioning: Not Supported 00:09:53.185 Per-NS Atomic Units: No 00:09:53.185 Maximum Single Source Range Length: 128 00:09:53.185 Maximum Copy Length: 128 00:09:53.185 Maximum Source Range Count: 128 00:09:53.185 NGUID/EUI64 Never Reused: No 00:09:53.185 Namespace Write Protected: No 00:09:53.185 Number of LBA Formats: 8 00:09:53.185 Current LBA Format: LBA Format #04 00:09:53.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.185 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.185 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.185 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.185 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.185 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.185 00:09:53.185 NVM Specific Namespace Data 00:09:53.185 =========================== 00:09:53.185 Logical Block Storage Tag Mask: 0 00:09:53.185 Protection Information Capabilities: 00:09:53.185 16b Guard Protection Information Storage Tag Support: No 00:09:53.185 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.185 Storage Tag Check Read Support: No 00:09:53.185 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Namespace ID:3 00:09:53.186 Error Recovery Timeout: Unlimited 00:09:53.186 Command Set Identifier: NVM (00h) 00:09:53.186 Deallocate: Supported 00:09:53.186 Deallocated/Unwritten Error: Supported 00:09:53.186 Deallocated Read Value: All 0x00 00:09:53.186 Deallocate in Write Zeroes: Not Supported 00:09:53.186 Deallocated Guard Field: 0xFFFF 00:09:53.186 Flush: Supported 00:09:53.186 Reservation: Not Supported 00:09:53.186 Namespace Sharing Capabilities: Private 00:09:53.186 Size (in LBAs): 1048576 (4GiB) 00:09:53.186 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.186 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.186 Thin Provisioning: Not Supported 
00:09:53.186 Per-NS Atomic Units: No 00:09:53.186 Maximum Single Source Range Length: 128 00:09:53.186 Maximum Copy Length: 128 00:09:53.186 Maximum Source Range Count: 128 00:09:53.186 NGUID/EUI64 Never Reused: No 00:09:53.186 Namespace Write Protected: No 00:09:53.186 Number of LBA Formats: 8 00:09:53.186 Current LBA Format: LBA Format #04 00:09:53.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.186 00:09:53.186 NVM Specific Namespace Data 00:09:53.186 =========================== 00:09:53.186 Logical Block Storage Tag Mask: 0 00:09:53.186 Protection Information Capabilities: 00:09:53.186 16b Guard Protection Information Storage Tag Support: No 00:09:53.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.186 Storage Tag Check Read Support: No 00:09:53.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.186 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:53.186 17:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:53.510 ===================================================== 00:09:53.510 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.510 ===================================================== 00:09:53.510 Controller Capabilities/Features 00:09:53.510 ================================ 00:09:53.510 Vendor ID: 1b36 00:09:53.510 Subsystem Vendor ID: 1af4 00:09:53.510 Serial Number: 12343 00:09:53.510 Model Number: QEMU NVMe Ctrl 00:09:53.510 Firmware Version: 8.0.0 00:09:53.510 Recommended Arb Burst: 6 00:09:53.510 IEEE OUI Identifier: 00 54 52 00:09:53.510 Multi-path I/O 00:09:53.510 May have multiple subsystem ports: No 00:09:53.510 May have multiple controllers: Yes 00:09:53.510 Associated with SR-IOV VF: No 00:09:53.510 Max Data Transfer Size: 524288 00:09:53.510 Max Number of Namespaces: 256 00:09:53.510 Max Number of I/O Queues: 64 00:09:53.510 NVMe Specification Version (VS): 1.4 00:09:53.510 NVMe Specification Version (Identify): 1.4 00:09:53.510 Maximum Queue Entries: 2048 00:09:53.510 Contiguous Queues Required: Yes 00:09:53.510 Arbitration Mechanisms Supported 00:09:53.510 Weighted Round Robin: Not Supported 00:09:53.510 Vendor Specific: Not Supported 00:09:53.510 Reset Timeout: 7500 ms 00:09:53.510 
Doorbell Stride: 4 bytes 00:09:53.510 NVM Subsystem Reset: Not Supported 00:09:53.510 Command Sets Supported 00:09:53.510 NVM Command Set: Supported 00:09:53.510 Boot Partition: Not Supported 00:09:53.510 Memory Page Size Minimum: 4096 bytes 00:09:53.510 Memory Page Size Maximum: 65536 bytes 00:09:53.510 Persistent Memory Region: Not Supported 00:09:53.510 Optional Asynchronous Events Supported 00:09:53.510 Namespace Attribute Notices: Supported 00:09:53.510 Firmware Activation Notices: Not Supported 00:09:53.510 ANA Change Notices: Not Supported 00:09:53.510 PLE Aggregate Log Change Notices: Not Supported 00:09:53.510 LBA Status Info Alert Notices: Not Supported 00:09:53.510 EGE Aggregate Log Change Notices: Not Supported 00:09:53.510 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.510 Zone Descriptor Change Notices: Not Supported 00:09:53.510 Discovery Log Change Notices: Not Supported 00:09:53.510 Controller Attributes 00:09:53.510 128-bit Host Identifier: Not Supported 00:09:53.510 Non-Operational Permissive Mode: Not Supported 00:09:53.510 NVM Sets: Not Supported 00:09:53.510 Read Recovery Levels: Not Supported 00:09:53.510 Endurance Groups: Supported 00:09:53.510 Predictable Latency Mode: Not Supported 00:09:53.510 Traffic Based Keep ALive: Not Supported 00:09:53.510 Namespace Granularity: Not Supported 00:09:53.511 SQ Associations: Not Supported 00:09:53.511 UUID List: Not Supported 00:09:53.511 Multi-Domain Subsystem: Not Supported 00:09:53.511 Fixed Capacity Management: Not Supported 00:09:53.511 Variable Capacity Management: Not Supported 00:09:53.511 Delete Endurance Group: Not Supported 00:09:53.511 Delete NVM Set: Not Supported 00:09:53.511 Extended LBA Formats Supported: Supported 00:09:53.511 Flexible Data Placement Supported: Supported 00:09:53.511 00:09:53.511 Controller Memory Buffer Support 00:09:53.511 ================================ 00:09:53.511 Supported: No 00:09:53.511 00:09:53.511 Persistent Memory Region Support 00:09:53.511 ================================ 00:09:53.511 Supported: No 00:09:53.511 00:09:53.511 Admin Command Set Attributes 00:09:53.511 ============================ 00:09:53.511 Security Send/Receive: Not Supported 00:09:53.511 Format NVM: Supported 00:09:53.511 Firmware Activate/Download: Not Supported 00:09:53.511 Namespace Management: Supported 00:09:53.511 Device Self-Test: Not Supported 00:09:53.511 Directives: Supported 00:09:53.511 NVMe-MI: Not Supported 00:09:53.511 Virtualization Management: Not Supported 00:09:53.511 Doorbell Buffer Config: Supported 00:09:53.511 Get LBA Status Capability: Not Supported 00:09:53.511 Command & Feature Lockdown Capability: Not Supported 00:09:53.511 Abort Command Limit: 4 00:09:53.511 Async Event Request Limit: 4 00:09:53.511 Number of Firmware Slots: N/A 00:09:53.511 Firmware Slot 1 Read-Only: N/A 00:09:53.511 Firmware Activation Without Reset: N/A 00:09:53.511 Multiple Update Detection Support: N/A 00:09:53.511 Firmware Update Granularity: No Information Provided 00:09:53.511 Per-Namespace SMART Log: Yes 00:09:53.511 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.511 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:53.511 Command Effects Log Page: Supported 00:09:53.511 Get Log Page Extended Data: Supported 00:09:53.511 Telemetry Log Pages: Not Supported 00:09:53.511 Persistent Event Log Pages: Not Supported 00:09:53.511 Supported Log Pages Log Page: May Support 00:09:53.511 Commands Supported & Effects Log Page: Not Supported 00:09:53.511 Feature Identifiers & Effects Log 
Page:May Support 00:09:53.511 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.511 Data Area 4 for Telemetry Log: Not Supported 00:09:53.511 Error Log Page Entries Supported: 1 00:09:53.511 Keep Alive: Not Supported 00:09:53.511 00:09:53.511 NVM Command Set Attributes 00:09:53.511 ========================== 00:09:53.511 Submission Queue Entry Size 00:09:53.511 Max: 64 00:09:53.511 Min: 64 00:09:53.511 Completion Queue Entry Size 00:09:53.511 Max: 16 00:09:53.511 Min: 16 00:09:53.511 Number of Namespaces: 256 00:09:53.511 Compare Command: Supported 00:09:53.511 Write Uncorrectable Command: Not Supported 00:09:53.511 Dataset Management Command: Supported 00:09:53.511 Write Zeroes Command: Supported 00:09:53.511 Set Features Save Field: Supported 00:09:53.511 Reservations: Not Supported 00:09:53.511 Timestamp: Supported 00:09:53.511 Copy: Supported 00:09:53.511 Volatile Write Cache: Present 00:09:53.511 Atomic Write Unit (Normal): 1 00:09:53.511 Atomic Write Unit (PFail): 1 00:09:53.511 Atomic Compare & Write Unit: 1 00:09:53.511 Fused Compare & Write: Not Supported 00:09:53.511 Scatter-Gather List 00:09:53.511 SGL Command Set: Supported 00:09:53.511 SGL Keyed: Not Supported 00:09:53.511 SGL Bit Bucket Descriptor: Not Supported 00:09:53.511 SGL Metadata Pointer: Not Supported 00:09:53.511 Oversized SGL: Not Supported 00:09:53.511 SGL Metadata Address: Not Supported 00:09:53.511 SGL Offset: Not Supported 00:09:53.511 Transport SGL Data Block: Not Supported 00:09:53.511 Replay Protected Memory Block: Not Supported 00:09:53.511 00:09:53.511 Firmware Slot Information 00:09:53.511 ========================= 00:09:53.511 Active slot: 1 00:09:53.511 Slot 1 Firmware Revision: 1.0 00:09:53.511 00:09:53.511 00:09:53.511 Commands Supported and Effects 00:09:53.511 ============================== 00:09:53.511 Admin Commands 00:09:53.511 -------------- 00:09:53.511 Delete I/O Submission Queue (00h): Supported 00:09:53.511 Create I/O Submission Queue (01h): Supported 00:09:53.511 Get Log Page (02h): Supported 00:09:53.511 Delete I/O Completion Queue (04h): Supported 00:09:53.511 Create I/O Completion Queue (05h): Supported 00:09:53.511 Identify (06h): Supported 00:09:53.511 Abort (08h): Supported 00:09:53.511 Set Features (09h): Supported 00:09:53.511 Get Features (0Ah): Supported 00:09:53.511 Asynchronous Event Request (0Ch): Supported 00:09:53.511 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.511 Directive Send (19h): Supported 00:09:53.511 Directive Receive (1Ah): Supported 00:09:53.511 Virtualization Management (1Ch): Supported 00:09:53.511 Doorbell Buffer Config (7Ch): Supported 00:09:53.511 Format NVM (80h): Supported LBA-Change 00:09:53.511 I/O Commands 00:09:53.511 ------------ 00:09:53.511 Flush (00h): Supported LBA-Change 00:09:53.511 Write (01h): Supported LBA-Change 00:09:53.511 Read (02h): Supported 00:09:53.511 Compare (05h): Supported 00:09:53.511 Write Zeroes (08h): Supported LBA-Change 00:09:53.511 Dataset Management (09h): Supported LBA-Change 00:09:53.511 Unknown (0Ch): Supported 00:09:53.511 Unknown (12h): Supported 00:09:53.511 Copy (19h): Supported LBA-Change 00:09:53.511 Unknown (1Dh): Supported LBA-Change 00:09:53.511 00:09:53.511 Error Log 00:09:53.511 ========= 00:09:53.511 00:09:53.511 Arbitration 00:09:53.511 =========== 00:09:53.511 Arbitration Burst: no limit 00:09:53.511 00:09:53.511 Power Management 00:09:53.511 ================ 00:09:53.511 Number of Power States: 1 00:09:53.511 Current Power State: Power State #0 00:09:53.511 Power State #0: 
00:09:53.511 Max Power: 25.00 W 00:09:53.511 Non-Operational State: Operational 00:09:53.511 Entry Latency: 16 microseconds 00:09:53.511 Exit Latency: 4 microseconds 00:09:53.511 Relative Read Throughput: 0 00:09:53.511 Relative Read Latency: 0 00:09:53.511 Relative Write Throughput: 0 00:09:53.511 Relative Write Latency: 0 00:09:53.511 Idle Power: Not Reported 00:09:53.511 Active Power: Not Reported 00:09:53.511 Non-Operational Permissive Mode: Not Supported 00:09:53.511 00:09:53.511 Health Information 00:09:53.511 ================== 00:09:53.511 Critical Warnings: 00:09:53.511 Available Spare Space: OK 00:09:53.511 Temperature: OK 00:09:53.511 Device Reliability: OK 00:09:53.511 Read Only: No 00:09:53.511 Volatile Memory Backup: OK 00:09:53.511 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.511 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.511 Available Spare: 0% 00:09:53.511 Available Spare Threshold: 0% 00:09:53.511 Life Percentage Used: 0% 00:09:53.511 Data Units Read: 797 00:09:53.511 Data Units Written: 690 00:09:53.511 Host Read Commands: 34996 00:09:53.511 Host Write Commands: 33586 00:09:53.511 Controller Busy Time: 0 minutes 00:09:53.511 Power Cycles: 0 00:09:53.511 Power On Hours: 0 hours 00:09:53.511 Unsafe Shutdowns: 0 00:09:53.511 Unrecoverable Media Errors: 0 00:09:53.511 Lifetime Error Log Entries: 0 00:09:53.511 Warning Temperature Time: 0 minutes 00:09:53.511 Critical Temperature Time: 0 minutes 00:09:53.511 00:09:53.511 Number of Queues 00:09:53.511 ================ 00:09:53.511 Number of I/O Submission Queues: 64 00:09:53.511 Number of I/O Completion Queues: 64 00:09:53.511 00:09:53.511 ZNS Specific Controller Data 00:09:53.511 ============================ 00:09:53.511 Zone Append Size Limit: 0 00:09:53.511 00:09:53.511 00:09:53.511 Active Namespaces 00:09:53.511 ================= 00:09:53.511 Namespace ID:1 00:09:53.511 Error Recovery Timeout: Unlimited 00:09:53.511 Command Set Identifier: NVM (00h) 00:09:53.512 Deallocate: Supported 00:09:53.512 Deallocated/Unwritten Error: Supported 00:09:53.512 Deallocated Read Value: All 0x00 00:09:53.512 Deallocate in Write Zeroes: Not Supported 00:09:53.512 Deallocated Guard Field: 0xFFFF 00:09:53.512 Flush: Supported 00:09:53.512 Reservation: Not Supported 00:09:53.512 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.512 Size (in LBAs): 262144 (1GiB) 00:09:53.512 Capacity (in LBAs): 262144 (1GiB) 00:09:53.512 Utilization (in LBAs): 262144 (1GiB) 00:09:53.512 Thin Provisioning: Not Supported 00:09:53.512 Per-NS Atomic Units: No 00:09:53.512 Maximum Single Source Range Length: 128 00:09:53.512 Maximum Copy Length: 128 00:09:53.512 Maximum Source Range Count: 128 00:09:53.512 NGUID/EUI64 Never Reused: No 00:09:53.512 Namespace Write Protected: No 00:09:53.512 Endurance group ID: 1 00:09:53.512 Number of LBA Formats: 8 00:09:53.512 Current LBA Format: LBA Format #04 00:09:53.512 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.512 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.512 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.512 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.512 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.512 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.512 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.512 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.512 00:09:53.512 Get Feature FDP: 00:09:53.512 ================ 00:09:53.512 Enabled: Yes 00:09:53.512 FDP configuration index: 0 00:09:53.512 
00:09:53.512 FDP configurations log page 00:09:53.512 =========================== 00:09:53.512 Number of FDP configurations: 1 00:09:53.512 Version: 0 00:09:53.512 Size: 112 00:09:53.512 FDP Configuration Descriptor: 0 00:09:53.512 Descriptor Size: 96 00:09:53.512 Reclaim Group Identifier format: 2 00:09:53.512 FDP Volatile Write Cache: Not Present 00:09:53.512 FDP Configuration: Valid 00:09:53.512 Vendor Specific Size: 0 00:09:53.512 Number of Reclaim Groups: 2 00:09:53.512 Number of Recalim Unit Handles: 8 00:09:53.512 Max Placement Identifiers: 128 00:09:53.512 Number of Namespaces Suppprted: 256 00:09:53.512 Reclaim unit Nominal Size: 6000000 bytes 00:09:53.512 Estimated Reclaim Unit Time Limit: Not Reported 00:09:53.512 RUH Desc #000: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #001: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #002: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #003: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #004: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #005: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #006: RUH Type: Initially Isolated 00:09:53.512 RUH Desc #007: RUH Type: Initially Isolated 00:09:53.512 00:09:53.512 FDP reclaim unit handle usage log page 00:09:53.512 ====================================== 00:09:53.512 Number of Reclaim Unit Handles: 8 00:09:53.512 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:53.512 RUH Usage Desc #001: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #002: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #003: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #004: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #005: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #006: RUH Attributes: Unused 00:09:53.512 RUH Usage Desc #007: RUH Attributes: Unused 00:09:53.512 00:09:53.512 FDP statistics log page 00:09:53.512 ======================= 00:09:53.512 Host bytes with metadata written: 427532288 00:09:53.512 Media bytes with metadata written: 427577344 00:09:53.512 Media bytes erased: 0 00:09:53.512 00:09:53.512 FDP events log page 00:09:53.512 =================== 00:09:53.512 Number of FDP events: 0 00:09:53.512 00:09:53.512 NVM Specific Namespace Data 00:09:53.512 =========================== 00:09:53.512 Logical Block Storage Tag Mask: 0 00:09:53.512 Protection Information Capabilities: 00:09:53.512 16b Guard Protection Information Storage Tag Support: No 00:09:53.512 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:53.512 Storage Tag Check Read Support: No 00:09:53.512 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:53.512 00:09:53.512 real 0m1.609s 00:09:53.512 user 0m0.634s 00:09:53.512 sys 0m0.776s 00:09:53.512 17:16:04 nvme.nvme_identify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.512 17:16:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:53.512 ************************************ 00:09:53.512 END TEST nvme_identify 00:09:53.512 ************************************ 00:09:53.512 17:16:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:53.512 17:16:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:53.512 17:16:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:53.512 17:16:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.512 17:16:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.512 ************************************ 00:09:53.512 START TEST nvme_perf 00:09:53.512 ************************************ 00:09:53.512 17:16:04 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:09:53.512 17:16:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:54.933 Initializing NVMe Controllers 00:09:54.933 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.933 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.933 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.934 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.934 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:54.934 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:54.934 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:54.934 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:54.934 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:54.934 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:54.934 Initialization complete. Launching workers. 
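The trace above records nvme.sh driving two SPDK example apps against the QEMU-emulated controllers: spdk_nvme_identify dumps the controller and namespace data shown earlier, and spdk_nvme_perf issues 12288-byte reads at queue depth 128 for 1 second per attached controller. Below is a minimal sketch of an equivalent manual run; the binary path, PCI address, and flags are copied from this log, and it assumes hugepages and device binding have already been prepared on the test VM (e.g. via SPDK's scripts/setup.sh).

    # Sketch only -- same invocations as recorded in this log, not a new test.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # build path used by the autotest VM
    # Dump identify data for one controller (PCIe address taken from the log):
    "$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
    # 12 KiB reads, queue depth 128, 1 second, latency tracking enabled as in the log:
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a sanity check on the summary table that follows, bandwidth is IOPS times I/O size: 11437.08 IOPS x 12288 B / 2^20 is approximately 134.03 MiB/s, which matches the per-controller MiB/s column.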
00:09:54.934 ======================================================== 00:09:54.934 Latency(us) 00:09:54.934 Device Information : IOPS MiB/s Average min max 00:09:54.934 PCIE (0000:00:10.0) NSID 1 from core 0: 11437.08 134.03 11200.47 8094.60 35057.51 00:09:54.934 PCIE (0000:00:11.0) NSID 1 from core 0: 11437.08 134.03 11188.50 8071.38 34154.99 00:09:54.934 PCIE (0000:00:13.0) NSID 1 from core 0: 11437.08 134.03 11172.99 7036.83 33792.70 00:09:54.934 PCIE (0000:00:12.0) NSID 1 from core 0: 11437.08 134.03 11157.08 6631.48 32861.23 00:09:54.934 PCIE (0000:00:12.0) NSID 2 from core 0: 11437.08 134.03 11140.92 6279.51 31837.84 00:09:54.934 PCIE (0000:00:12.0) NSID 3 from core 0: 11437.08 134.03 11125.06 5875.30 30576.14 00:09:54.934 ======================================================== 00:09:54.934 Total : 68622.50 804.17 11164.17 5875.30 35057.51 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8698.415us 00:09:54.934 10.00000% : 9532.509us 00:09:54.934 25.00000% : 10009.135us 00:09:54.934 50.00000% : 10724.073us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13166.778us 00:09:54.934 95.00000% : 14000.873us 00:09:54.934 98.00000% : 15728.640us 00:09:54.934 99.00000% : 26095.244us 00:09:54.934 99.50000% : 33363.782us 00:09:54.934 99.90000% : 34793.658us 00:09:54.934 99.99000% : 35031.971us 00:09:54.934 99.99900% : 35270.284us 00:09:54.934 99.99990% : 35270.284us 00:09:54.934 99.99999% : 35270.284us 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8757.993us 00:09:54.934 10.00000% : 9592.087us 00:09:54.934 25.00000% : 10068.713us 00:09:54.934 50.00000% : 10664.495us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13166.778us 00:09:54.934 95.00000% : 14000.873us 00:09:54.934 98.00000% : 15847.796us 00:09:54.934 99.00000% : 25618.618us 00:09:54.934 99.50000% : 32648.844us 00:09:54.934 99.90000% : 34078.720us 00:09:54.934 99.99000% : 34317.033us 00:09:54.934 99.99900% : 34317.033us 00:09:54.934 99.99990% : 34317.033us 00:09:54.934 99.99999% : 34317.033us 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8698.415us 00:09:54.934 10.00000% : 9592.087us 00:09:54.934 25.00000% : 10068.713us 00:09:54.934 50.00000% : 10664.495us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13226.356us 00:09:54.934 95.00000% : 13941.295us 00:09:54.934 98.00000% : 15609.484us 00:09:54.934 99.00000% : 24903.680us 00:09:54.934 99.50000% : 32172.218us 00:09:54.934 99.90000% : 33602.095us 00:09:54.934 99.99000% : 33840.407us 00:09:54.934 99.99900% : 33840.407us 00:09:54.934 99.99990% : 33840.407us 00:09:54.934 99.99999% : 33840.407us 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8698.415us 00:09:54.934 10.00000% : 9592.087us 00:09:54.934 25.00000% : 10068.713us 00:09:54.934 50.00000% : 10664.495us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13226.356us 00:09:54.934 95.00000% : 13881.716us 00:09:54.934 98.00000% : 15371.171us 
00:09:54.934 99.00000% : 24069.585us 00:09:54.934 99.50000% : 31457.280us 00:09:54.934 99.90000% : 32648.844us 00:09:54.934 99.99000% : 32887.156us 00:09:54.934 99.99900% : 32887.156us 00:09:54.934 99.99990% : 32887.156us 00:09:54.934 99.99999% : 32887.156us 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8698.415us 00:09:54.934 10.00000% : 9592.087us 00:09:54.934 25.00000% : 10068.713us 00:09:54.934 50.00000% : 10664.495us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13226.356us 00:09:54.934 95.00000% : 13941.295us 00:09:54.934 98.00000% : 15371.171us 00:09:54.934 99.00000% : 23116.335us 00:09:54.934 99.50000% : 30265.716us 00:09:54.934 99.90000% : 31695.593us 00:09:54.934 99.99000% : 31933.905us 00:09:54.934 99.99900% : 31933.905us 00:09:54.934 99.99990% : 31933.905us 00:09:54.934 99.99999% : 31933.905us 00:09:54.934 00:09:54.934 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:54.934 ================================================================================= 00:09:54.934 1.00000% : 8638.836us 00:09:54.934 10.00000% : 9592.087us 00:09:54.934 25.00000% : 10068.713us 00:09:54.934 50.00000% : 10664.495us 00:09:54.934 75.00000% : 11736.902us 00:09:54.934 90.00000% : 13166.778us 00:09:54.934 95.00000% : 14060.451us 00:09:54.934 98.00000% : 15371.171us 00:09:54.934 99.00000% : 22163.084us 00:09:54.934 99.50000% : 29312.465us 00:09:54.934 99.90000% : 30384.873us 00:09:54.934 99.99000% : 30742.342us 00:09:54.934 99.99900% : 30742.342us 00:09:54.934 99.99990% : 30742.342us 00:09:54.934 99.99999% : 30742.342us 00:09:54.934 00:09:54.934 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:54.934 ============================================================================== 00:09:54.934 Range in us Cumulative IO count 00:09:54.934 8043.055 - 8102.633: 0.0087% ( 1) 00:09:54.934 8102.633 - 8162.211: 0.0436% ( 4) 00:09:54.934 8162.211 - 8221.789: 0.0698% ( 3) 00:09:54.934 8221.789 - 8281.367: 0.0873% ( 2) 00:09:54.934 8281.367 - 8340.945: 0.1571% ( 8) 00:09:54.934 8340.945 - 8400.524: 0.2706% ( 13) 00:09:54.934 8400.524 - 8460.102: 0.3841% ( 13) 00:09:54.934 8460.102 - 8519.680: 0.5499% ( 19) 00:09:54.934 8519.680 - 8579.258: 0.7507% ( 23) 00:09:54.934 8579.258 - 8638.836: 0.9951% ( 28) 00:09:54.934 8638.836 - 8698.415: 1.2919% ( 34) 00:09:54.934 8698.415 - 8757.993: 1.5625% ( 31) 00:09:54.934 8757.993 - 8817.571: 1.9029% ( 39) 00:09:54.934 8817.571 - 8877.149: 2.2783% ( 43) 00:09:54.934 8877.149 - 8936.727: 2.7235% ( 51) 00:09:54.934 8936.727 - 8996.305: 3.1599% ( 50) 00:09:54.934 8996.305 - 9055.884: 3.6487% ( 56) 00:09:54.934 9055.884 - 9115.462: 4.1638% ( 59) 00:09:54.934 9115.462 - 9175.040: 4.7137% ( 63) 00:09:54.934 9175.040 - 9234.618: 5.3422% ( 72) 00:09:54.934 9234.618 - 9294.196: 6.0667% ( 83) 00:09:54.934 9294.196 - 9353.775: 6.9658% ( 103) 00:09:54.934 9353.775 - 9413.353: 7.8649% ( 103) 00:09:54.934 9413.353 - 9472.931: 8.9473% ( 124) 00:09:54.934 9472.931 - 9532.509: 10.3265% ( 158) 00:09:54.934 9532.509 - 9592.087: 11.6358% ( 150) 00:09:54.934 9592.087 - 9651.665: 13.3293% ( 194) 00:09:54.934 9651.665 - 9711.244: 15.1973% ( 214) 00:09:54.934 9711.244 - 9770.822: 16.9780% ( 204) 00:09:54.934 9770.822 - 9830.400: 19.0293% ( 235) 00:09:54.934 9830.400 - 9889.978: 21.0196% ( 228) 00:09:54.934 9889.978 - 9949.556: 23.1320% ( 242) 00:09:54.934 9949.556 - 10009.135: 25.3579% 
( 255) 00:09:54.934 10009.135 - 10068.713: 27.4529% ( 240) 00:09:54.934 10068.713 - 10128.291: 29.5915% ( 245) 00:09:54.934 10128.291 - 10187.869: 31.7999% ( 253) 00:09:54.934 10187.869 - 10247.447: 33.9036% ( 241) 00:09:54.934 10247.447 - 10307.025: 36.1208% ( 254) 00:09:54.934 10307.025 - 10366.604: 38.3554% ( 256) 00:09:54.934 10366.604 - 10426.182: 40.6075% ( 258) 00:09:54.934 10426.182 - 10485.760: 42.8422% ( 256) 00:09:54.934 10485.760 - 10545.338: 45.1554% ( 265) 00:09:54.934 10545.338 - 10604.916: 47.3202% ( 248) 00:09:54.934 10604.916 - 10664.495: 49.6334% ( 265) 00:09:54.934 10664.495 - 10724.073: 51.7545% ( 243) 00:09:54.934 10724.073 - 10783.651: 53.9717% ( 254) 00:09:54.934 10783.651 - 10843.229: 55.9619% ( 228) 00:09:54.934 10843.229 - 10902.807: 57.9347% ( 226) 00:09:54.934 10902.807 - 10962.385: 59.8987% ( 225) 00:09:54.934 10962.385 - 11021.964: 61.5398% ( 188) 00:09:54.934 11021.964 - 11081.542: 63.2856% ( 200) 00:09:54.934 11081.542 - 11141.120: 64.9529% ( 191) 00:09:54.934 11141.120 - 11200.698: 66.3408% ( 159) 00:09:54.934 11200.698 - 11260.276: 67.6589% ( 151) 00:09:54.934 11260.276 - 11319.855: 68.7064% ( 120) 00:09:54.934 11319.855 - 11379.433: 69.8237% ( 128) 00:09:54.934 11379.433 - 11439.011: 70.9584% ( 130) 00:09:54.934 11439.011 - 11498.589: 71.8139% ( 98) 00:09:54.934 11498.589 - 11558.167: 72.5908% ( 89) 00:09:54.934 11558.167 - 11617.745: 73.4375% ( 97) 00:09:54.934 11617.745 - 11677.324: 74.1882% ( 86) 00:09:54.935 11677.324 - 11736.902: 75.0000% ( 93) 00:09:54.935 11736.902 - 11796.480: 75.6983% ( 80) 00:09:54.935 11796.480 - 11856.058: 76.3443% ( 74) 00:09:54.935 11856.058 - 11915.636: 77.0339% ( 79) 00:09:54.935 11915.636 - 11975.215: 77.6798% ( 74) 00:09:54.935 11975.215 - 12034.793: 78.2647% ( 67) 00:09:54.935 12034.793 - 12094.371: 78.8670% ( 69) 00:09:54.935 12094.371 - 12153.949: 79.4867% ( 71) 00:09:54.935 12153.949 - 12213.527: 80.1065% ( 71) 00:09:54.935 12213.527 - 12273.105: 80.7350% ( 72) 00:09:54.935 12273.105 - 12332.684: 81.4159% ( 78) 00:09:54.935 12332.684 - 12392.262: 82.1491% ( 84) 00:09:54.935 12392.262 - 12451.840: 82.8125% ( 76) 00:09:54.935 12451.840 - 12511.418: 83.4672% ( 75) 00:09:54.935 12511.418 - 12570.996: 84.1742% ( 81) 00:09:54.935 12570.996 - 12630.575: 84.8638% ( 79) 00:09:54.935 12630.575 - 12690.153: 85.5447% ( 78) 00:09:54.935 12690.153 - 12749.731: 86.1732% ( 72) 00:09:54.935 12749.731 - 12809.309: 86.8104% ( 73) 00:09:54.935 12809.309 - 12868.887: 87.4040% ( 68) 00:09:54.935 12868.887 - 12928.465: 87.9801% ( 66) 00:09:54.935 12928.465 - 12988.044: 88.6173% ( 73) 00:09:54.935 12988.044 - 13047.622: 89.1498% ( 61) 00:09:54.935 13047.622 - 13107.200: 89.5775% ( 49) 00:09:54.935 13107.200 - 13166.778: 90.1274% ( 63) 00:09:54.935 13166.778 - 13226.356: 90.6163% ( 56) 00:09:54.935 13226.356 - 13285.935: 91.0702% ( 52) 00:09:54.935 13285.935 - 13345.513: 91.5765% ( 58) 00:09:54.935 13345.513 - 13405.091: 92.0653% ( 56) 00:09:54.935 13405.091 - 13464.669: 92.4668% ( 46) 00:09:54.935 13464.669 - 13524.247: 92.8596% ( 45) 00:09:54.935 13524.247 - 13583.825: 93.2612% ( 46) 00:09:54.935 13583.825 - 13643.404: 93.6365% ( 43) 00:09:54.935 13643.404 - 13702.982: 93.9857% ( 40) 00:09:54.935 13702.982 - 13762.560: 94.2563% ( 31) 00:09:54.935 13762.560 - 13822.138: 94.5618% ( 35) 00:09:54.935 13822.138 - 13881.716: 94.7626% ( 23) 00:09:54.935 13881.716 - 13941.295: 94.9633% ( 23) 00:09:54.935 13941.295 - 14000.873: 95.1205% ( 18) 00:09:54.935 14000.873 - 14060.451: 95.2950% ( 20) 00:09:54.935 14060.451 - 14120.029: 95.4522% ( 18) 
00:09:54.935 14120.029 - 14179.607: 95.6093% ( 18) 00:09:54.935 14179.607 - 14239.185: 95.7402% ( 15) 00:09:54.935 14239.185 - 14298.764: 95.8799% ( 16) 00:09:54.935 14298.764 - 14358.342: 96.0545% ( 20) 00:09:54.935 14358.342 - 14417.920: 96.1418% ( 10) 00:09:54.935 14417.920 - 14477.498: 96.2814% ( 16) 00:09:54.935 14477.498 - 14537.076: 96.4124% ( 15) 00:09:54.935 14537.076 - 14596.655: 96.4647% ( 6) 00:09:54.935 14596.655 - 14656.233: 96.6131% ( 17) 00:09:54.935 14656.233 - 14715.811: 96.7266% ( 13) 00:09:54.935 14715.811 - 14775.389: 96.8314% ( 12) 00:09:54.935 14775.389 - 14834.967: 96.9186% ( 10) 00:09:54.935 14834.967 - 14894.545: 97.0234% ( 12) 00:09:54.935 14894.545 - 14954.124: 97.1369% ( 13) 00:09:54.935 14954.124 - 15013.702: 97.2067% ( 8) 00:09:54.935 15013.702 - 15073.280: 97.3027% ( 11) 00:09:54.935 15073.280 - 15132.858: 97.3813% ( 9) 00:09:54.935 15132.858 - 15192.436: 97.4686% ( 10) 00:09:54.935 15192.436 - 15252.015: 97.5471% ( 9) 00:09:54.935 15252.015 - 15371.171: 97.6693% ( 14) 00:09:54.935 15371.171 - 15490.327: 97.7916% ( 14) 00:09:54.935 15490.327 - 15609.484: 97.9138% ( 14) 00:09:54.935 15609.484 - 15728.640: 98.0272% ( 13) 00:09:54.935 15728.640 - 15847.796: 98.1233% ( 11) 00:09:54.935 15847.796 - 15966.953: 98.2455% ( 14) 00:09:54.935 15966.953 - 16086.109: 98.3589% ( 13) 00:09:54.935 16086.109 - 16205.265: 98.4550% ( 11) 00:09:54.935 16205.265 - 16324.422: 98.5510% ( 11) 00:09:54.935 16324.422 - 16443.578: 98.5859% ( 4) 00:09:54.935 16443.578 - 16562.735: 98.6295% ( 5) 00:09:54.935 16562.735 - 16681.891: 98.6645% ( 4) 00:09:54.935 16681.891 - 16801.047: 98.7168% ( 6) 00:09:54.935 16801.047 - 16920.204: 98.7605% ( 5) 00:09:54.935 16920.204 - 17039.360: 98.7954% ( 4) 00:09:54.935 17039.360 - 17158.516: 98.8303% ( 4) 00:09:54.935 17158.516 - 17277.673: 98.8740% ( 5) 00:09:54.935 17277.673 - 17396.829: 98.8827% ( 1) 00:09:54.935 25499.462 - 25618.618: 98.8914% ( 1) 00:09:54.935 25618.618 - 25737.775: 98.9089% ( 2) 00:09:54.935 25737.775 - 25856.931: 98.9525% ( 5) 00:09:54.935 25856.931 - 25976.087: 98.9700% ( 2) 00:09:54.935 25976.087 - 26095.244: 99.0049% ( 4) 00:09:54.935 26095.244 - 26214.400: 99.0398% ( 4) 00:09:54.935 26214.400 - 26333.556: 99.0660% ( 3) 00:09:54.935 26333.556 - 26452.713: 99.1009% ( 4) 00:09:54.935 26452.713 - 26571.869: 99.1358% ( 4) 00:09:54.935 26571.869 - 26691.025: 99.1707% ( 4) 00:09:54.935 26691.025 - 26810.182: 99.1969% ( 3) 00:09:54.935 26810.182 - 26929.338: 99.2318% ( 4) 00:09:54.935 26929.338 - 27048.495: 99.2668% ( 4) 00:09:54.935 27048.495 - 27167.651: 99.3017% ( 4) 00:09:54.935 27167.651 - 27286.807: 99.3453% ( 5) 00:09:54.935 27286.807 - 27405.964: 99.3628% ( 2) 00:09:54.935 27405.964 - 27525.120: 99.3977% ( 4) 00:09:54.935 27525.120 - 27644.276: 99.4326% ( 4) 00:09:54.935 27644.276 - 27763.433: 99.4413% ( 1) 00:09:54.935 32887.156 - 33125.469: 99.4588% ( 2) 00:09:54.935 33125.469 - 33363.782: 99.5374% ( 9) 00:09:54.935 33363.782 - 33602.095: 99.6072% ( 8) 00:09:54.935 33602.095 - 33840.407: 99.6683% ( 7) 00:09:54.935 33840.407 - 34078.720: 99.7294% ( 7) 00:09:54.935 34078.720 - 34317.033: 99.7992% ( 8) 00:09:54.935 34317.033 - 34555.345: 99.8603% ( 7) 00:09:54.935 34555.345 - 34793.658: 99.9302% ( 8) 00:09:54.935 34793.658 - 35031.971: 99.9913% ( 7) 00:09:54.935 35031.971 - 35270.284: 100.0000% ( 1) 00:09:54.935 00:09:54.935 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:54.935 ============================================================================== 00:09:54.935 Range in us Cumulative IO count 
00:09:54.935 8043.055 - 8102.633: 0.0087% ( 1) 00:09:54.935 8102.633 - 8162.211: 0.0611% ( 6) 00:09:54.935 8162.211 - 8221.789: 0.0873% ( 3) 00:09:54.935 8221.789 - 8281.367: 0.1222% ( 4) 00:09:54.935 8281.367 - 8340.945: 0.1746% ( 6) 00:09:54.935 8340.945 - 8400.524: 0.2270% ( 6) 00:09:54.935 8400.524 - 8460.102: 0.2968% ( 8) 00:09:54.935 8460.102 - 8519.680: 0.4714% ( 20) 00:09:54.935 8519.680 - 8579.258: 0.5936% ( 14) 00:09:54.935 8579.258 - 8638.836: 0.7682% ( 20) 00:09:54.935 8638.836 - 8698.415: 0.9951% ( 26) 00:09:54.935 8698.415 - 8757.993: 1.2832% ( 33) 00:09:54.935 8757.993 - 8817.571: 1.6323% ( 40) 00:09:54.935 8817.571 - 8877.149: 2.0426% ( 47) 00:09:54.935 8877.149 - 8936.727: 2.4791% ( 50) 00:09:54.935 8936.727 - 8996.305: 2.8980% ( 48) 00:09:54.935 8996.305 - 9055.884: 3.4480% ( 63) 00:09:54.935 9055.884 - 9115.462: 3.9717% ( 60) 00:09:54.935 9115.462 - 9175.040: 4.5129% ( 62) 00:09:54.935 9175.040 - 9234.618: 5.1414% ( 72) 00:09:54.935 9234.618 - 9294.196: 5.7088% ( 65) 00:09:54.935 9294.196 - 9353.775: 6.3460% ( 73) 00:09:54.935 9353.775 - 9413.353: 7.1840% ( 96) 00:09:54.935 9413.353 - 9472.931: 8.2577% ( 123) 00:09:54.935 9472.931 - 9532.509: 9.4623% ( 138) 00:09:54.935 9532.509 - 9592.087: 10.8764% ( 162) 00:09:54.935 9592.087 - 9651.665: 12.4389% ( 179) 00:09:54.935 9651.665 - 9711.244: 14.0712% ( 187) 00:09:54.935 9711.244 - 9770.822: 15.8781% ( 207) 00:09:54.935 9770.822 - 9830.400: 17.7636% ( 216) 00:09:54.935 9830.400 - 9889.978: 19.9022% ( 245) 00:09:54.935 9889.978 - 9949.556: 22.0583% ( 247) 00:09:54.935 9949.556 - 10009.135: 24.2842% ( 255) 00:09:54.935 10009.135 - 10068.713: 26.6498% ( 271) 00:09:54.935 10068.713 - 10128.291: 29.0328% ( 273) 00:09:54.935 10128.291 - 10187.869: 31.3460% ( 265) 00:09:54.935 10187.869 - 10247.447: 33.6243% ( 261) 00:09:54.935 10247.447 - 10307.025: 36.1906% ( 294) 00:09:54.935 10307.025 - 10366.604: 38.5999% ( 276) 00:09:54.935 10366.604 - 10426.182: 41.0353% ( 279) 00:09:54.935 10426.182 - 10485.760: 43.5841% ( 292) 00:09:54.935 10485.760 - 10545.338: 46.1243% ( 291) 00:09:54.935 10545.338 - 10604.916: 48.4200% ( 263) 00:09:54.935 10604.916 - 10664.495: 50.7507% ( 267) 00:09:54.935 10664.495 - 10724.073: 52.8369% ( 239) 00:09:54.935 10724.073 - 10783.651: 54.9406% ( 241) 00:09:54.935 10783.651 - 10843.229: 56.9221% ( 227) 00:09:54.935 10843.229 - 10902.807: 58.9473% ( 232) 00:09:54.935 10902.807 - 10962.385: 60.7804% ( 210) 00:09:54.935 10962.385 - 11021.964: 62.4564% ( 192) 00:09:54.935 11021.964 - 11081.542: 63.9839% ( 175) 00:09:54.935 11081.542 - 11141.120: 65.4068% ( 163) 00:09:54.935 11141.120 - 11200.698: 66.6987% ( 148) 00:09:54.935 11200.698 - 11260.276: 67.9295% ( 141) 00:09:54.935 11260.276 - 11319.855: 69.0381% ( 127) 00:09:54.935 11319.855 - 11379.433: 70.1117% ( 123) 00:09:54.935 11379.433 - 11439.011: 71.1068% ( 114) 00:09:54.935 11439.011 - 11498.589: 72.0234% ( 105) 00:09:54.935 11498.589 - 11558.167: 72.9399% ( 105) 00:09:54.935 11558.167 - 11617.745: 73.7692% ( 95) 00:09:54.935 11617.745 - 11677.324: 74.6421% ( 100) 00:09:54.935 11677.324 - 11736.902: 75.4452% ( 92) 00:09:54.935 11736.902 - 11796.480: 76.1784% ( 84) 00:09:54.935 11796.480 - 11856.058: 76.8593% ( 78) 00:09:54.935 11856.058 - 11915.636: 77.4616% ( 69) 00:09:54.935 11915.636 - 11975.215: 78.0901% ( 72) 00:09:54.935 11975.215 - 12034.793: 78.7360% ( 74) 00:09:54.935 12034.793 - 12094.371: 79.3645% ( 72) 00:09:54.935 12094.371 - 12153.949: 80.0192% ( 75) 00:09:54.935 12153.949 - 12213.527: 80.6652% ( 74) 00:09:54.935 12213.527 - 12273.105: 81.3547% 
( 79) 00:09:54.935 12273.105 - 12332.684: 82.0094% ( 75) 00:09:54.935 12332.684 - 12392.262: 82.6292% ( 71) 00:09:54.935 12392.262 - 12451.840: 83.2490% ( 71) 00:09:54.935 12451.840 - 12511.418: 83.8251% ( 66) 00:09:54.935 12511.418 - 12570.996: 84.4710% ( 74) 00:09:54.935 12570.996 - 12630.575: 85.0646% ( 68) 00:09:54.935 12630.575 - 12690.153: 85.7542% ( 79) 00:09:54.935 12690.153 - 12749.731: 86.4525% ( 80) 00:09:54.935 12749.731 - 12809.309: 87.0810% ( 72) 00:09:54.935 12809.309 - 12868.887: 87.7531% ( 77) 00:09:54.935 12868.887 - 12928.465: 88.3293% ( 66) 00:09:54.935 12928.465 - 12988.044: 88.9054% ( 66) 00:09:54.935 12988.044 - 13047.622: 89.3855% ( 55) 00:09:54.935 13047.622 - 13107.200: 89.8656% ( 55) 00:09:54.935 13107.200 - 13166.778: 90.3544% ( 56) 00:09:54.935 13166.778 - 13226.356: 90.8170% ( 53) 00:09:54.936 13226.356 - 13285.935: 91.2971% ( 55) 00:09:54.936 13285.935 - 13345.513: 91.7685% ( 54) 00:09:54.936 13345.513 - 13405.091: 92.2399% ( 54) 00:09:54.936 13405.091 - 13464.669: 92.7287% ( 56) 00:09:54.936 13464.669 - 13524.247: 93.1652% ( 50) 00:09:54.936 13524.247 - 13583.825: 93.5143% ( 40) 00:09:54.936 13583.825 - 13643.404: 93.8373% ( 37) 00:09:54.936 13643.404 - 13702.982: 94.1079% ( 31) 00:09:54.936 13702.982 - 13762.560: 94.3436% ( 27) 00:09:54.936 13762.560 - 13822.138: 94.5618% ( 25) 00:09:54.936 13822.138 - 13881.716: 94.7277% ( 19) 00:09:54.936 13881.716 - 13941.295: 94.9197% ( 22) 00:09:54.936 13941.295 - 14000.873: 95.0943% ( 20) 00:09:54.936 14000.873 - 14060.451: 95.2601% ( 19) 00:09:54.936 14060.451 - 14120.029: 95.4085% ( 17) 00:09:54.936 14120.029 - 14179.607: 95.5569% ( 17) 00:09:54.936 14179.607 - 14239.185: 95.7140% ( 18) 00:09:54.936 14239.185 - 14298.764: 95.8537% ( 16) 00:09:54.936 14298.764 - 14358.342: 96.0021% ( 17) 00:09:54.936 14358.342 - 14417.920: 96.1505% ( 17) 00:09:54.936 14417.920 - 14477.498: 96.3076% ( 18) 00:09:54.936 14477.498 - 14537.076: 96.4211% ( 13) 00:09:54.936 14537.076 - 14596.655: 96.5695% ( 17) 00:09:54.936 14596.655 - 14656.233: 96.6917% ( 14) 00:09:54.936 14656.233 - 14715.811: 96.7964% ( 12) 00:09:54.936 14715.811 - 14775.389: 96.9186% ( 14) 00:09:54.936 14775.389 - 14834.967: 96.9972% ( 9) 00:09:54.936 14834.967 - 14894.545: 97.0932% ( 11) 00:09:54.936 14894.545 - 14954.124: 97.1456% ( 6) 00:09:54.936 14954.124 - 15013.702: 97.2067% ( 7) 00:09:54.936 15013.702 - 15073.280: 97.2591% ( 6) 00:09:54.936 15073.280 - 15132.858: 97.3376% ( 9) 00:09:54.936 15132.858 - 15192.436: 97.3900% ( 6) 00:09:54.936 15192.436 - 15252.015: 97.4686% ( 9) 00:09:54.936 15252.015 - 15371.171: 97.6432% ( 20) 00:09:54.936 15371.171 - 15490.327: 97.7566% ( 13) 00:09:54.936 15490.327 - 15609.484: 97.8527% ( 11) 00:09:54.936 15609.484 - 15728.640: 97.9225% ( 8) 00:09:54.936 15728.640 - 15847.796: 98.0098% ( 10) 00:09:54.936 15847.796 - 15966.953: 98.0971% ( 10) 00:09:54.936 15966.953 - 16086.109: 98.1931% ( 11) 00:09:54.936 16086.109 - 16205.265: 98.2891% ( 11) 00:09:54.936 16205.265 - 16324.422: 98.3851% ( 11) 00:09:54.936 16324.422 - 16443.578: 98.4724% ( 10) 00:09:54.936 16443.578 - 16562.735: 98.5597% ( 10) 00:09:54.936 16562.735 - 16681.891: 98.6121% ( 6) 00:09:54.936 16681.891 - 16801.047: 98.6645% ( 6) 00:09:54.936 16801.047 - 16920.204: 98.7081% ( 5) 00:09:54.936 16920.204 - 17039.360: 98.7517% ( 5) 00:09:54.936 17039.360 - 17158.516: 98.7954% ( 5) 00:09:54.936 17158.516 - 17277.673: 98.8478% ( 6) 00:09:54.936 17277.673 - 17396.829: 98.8827% ( 4) 00:09:54.936 25022.836 - 25141.993: 98.8914% ( 1) 00:09:54.936 25141.993 - 25261.149: 98.9176% 
( 3) 00:09:54.936 25261.149 - 25380.305: 98.9612% ( 5) 00:09:54.936 25380.305 - 25499.462: 98.9962% ( 4) 00:09:54.936 25499.462 - 25618.618: 99.0311% ( 4) 00:09:54.936 25618.618 - 25737.775: 99.0747% ( 5) 00:09:54.936 25737.775 - 25856.931: 99.1096% ( 4) 00:09:54.936 25856.931 - 25976.087: 99.1446% ( 4) 00:09:54.936 25976.087 - 26095.244: 99.1882% ( 5) 00:09:54.936 26095.244 - 26214.400: 99.2231% ( 4) 00:09:54.936 26214.400 - 26333.556: 99.2580% ( 4) 00:09:54.936 26333.556 - 26452.713: 99.2929% ( 4) 00:09:54.936 26452.713 - 26571.869: 99.3366% ( 5) 00:09:54.936 26571.869 - 26691.025: 99.3715% ( 4) 00:09:54.936 26691.025 - 26810.182: 99.4064% ( 4) 00:09:54.936 26810.182 - 26929.338: 99.4413% ( 4) 00:09:54.936 32172.218 - 32410.531: 99.4675% ( 3) 00:09:54.936 32410.531 - 32648.844: 99.5374% ( 8) 00:09:54.936 32648.844 - 32887.156: 99.5985% ( 7) 00:09:54.936 32887.156 - 33125.469: 99.6770% ( 9) 00:09:54.936 33125.469 - 33363.782: 99.7469% ( 8) 00:09:54.936 33363.782 - 33602.095: 99.8254% ( 9) 00:09:54.936 33602.095 - 33840.407: 99.8953% ( 8) 00:09:54.936 33840.407 - 34078.720: 99.9738% ( 9) 00:09:54.936 34078.720 - 34317.033: 100.0000% ( 3) 00:09:54.936 00:09:54.936 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:54.936 ============================================================================== 00:09:54.936 Range in us Cumulative IO count 00:09:54.936 7030.225 - 7060.015: 0.0175% ( 2) 00:09:54.936 7060.015 - 7089.804: 0.0349% ( 2) 00:09:54.936 7089.804 - 7119.593: 0.0436% ( 1) 00:09:54.936 7119.593 - 7149.382: 0.0611% ( 2) 00:09:54.936 7149.382 - 7179.171: 0.0873% ( 3) 00:09:54.936 7179.171 - 7208.960: 0.0960% ( 1) 00:09:54.936 7208.960 - 7238.749: 0.1047% ( 1) 00:09:54.936 7238.749 - 7268.538: 0.1309% ( 3) 00:09:54.936 7268.538 - 7298.327: 0.1571% ( 3) 00:09:54.936 7298.327 - 7328.116: 0.1659% ( 1) 00:09:54.936 7328.116 - 7357.905: 0.1833% ( 2) 00:09:54.936 7357.905 - 7387.695: 0.1920% ( 1) 00:09:54.936 7387.695 - 7417.484: 0.2095% ( 2) 00:09:54.936 7417.484 - 7447.273: 0.2182% ( 1) 00:09:54.936 7447.273 - 7477.062: 0.2270% ( 1) 00:09:54.936 7477.062 - 7506.851: 0.2444% ( 2) 00:09:54.936 7506.851 - 7536.640: 0.2619% ( 2) 00:09:54.936 7536.640 - 7566.429: 0.2706% ( 1) 00:09:54.936 7566.429 - 7596.218: 0.2881% ( 2) 00:09:54.936 7596.218 - 7626.007: 0.3055% ( 2) 00:09:54.936 7626.007 - 7685.585: 0.3404% ( 4) 00:09:54.936 7685.585 - 7745.164: 0.3666% ( 3) 00:09:54.936 7745.164 - 7804.742: 0.4015% ( 4) 00:09:54.936 7804.742 - 7864.320: 0.4277% ( 3) 00:09:54.936 7864.320 - 7923.898: 0.4626% ( 4) 00:09:54.936 7923.898 - 7983.476: 0.4976% ( 4) 00:09:54.936 7983.476 - 8043.055: 0.5325% ( 4) 00:09:54.936 8043.055 - 8102.633: 0.5587% ( 3) 00:09:54.936 8400.524 - 8460.102: 0.5761% ( 2) 00:09:54.936 8460.102 - 8519.680: 0.6634% ( 10) 00:09:54.936 8519.680 - 8579.258: 0.7943% ( 15) 00:09:54.936 8579.258 - 8638.836: 0.9777% ( 21) 00:09:54.936 8638.836 - 8698.415: 1.2046% ( 26) 00:09:54.936 8698.415 - 8757.993: 1.4403% ( 27) 00:09:54.936 8757.993 - 8817.571: 1.7022% ( 30) 00:09:54.936 8817.571 - 8877.149: 2.0077% ( 35) 00:09:54.936 8877.149 - 8936.727: 2.3394% ( 38) 00:09:54.936 8936.727 - 8996.305: 2.7671% ( 49) 00:09:54.936 8996.305 - 9055.884: 3.2472% ( 55) 00:09:54.936 9055.884 - 9115.462: 3.7535% ( 58) 00:09:54.936 9115.462 - 9175.040: 4.3471% ( 68) 00:09:54.936 9175.040 - 9234.618: 4.8970% ( 63) 00:09:54.936 9234.618 - 9294.196: 5.5604% ( 76) 00:09:54.936 9294.196 - 9353.775: 6.1627% ( 69) 00:09:54.936 9353.775 - 9413.353: 6.9134% ( 86) 00:09:54.936 9413.353 - 9472.931: 7.8038% 
( 102) 00:09:54.936 9472.931 - 9532.509: 9.0346% ( 141) 00:09:54.936 9532.509 - 9592.087: 10.3701% ( 153) 00:09:54.936 9592.087 - 9651.665: 11.9850% ( 185) 00:09:54.936 9651.665 - 9711.244: 13.7046% ( 197) 00:09:54.936 9711.244 - 9770.822: 15.4679% ( 202) 00:09:54.936 9770.822 - 9830.400: 17.2922% ( 209) 00:09:54.936 9830.400 - 9889.978: 19.3610% ( 237) 00:09:54.936 9889.978 - 9949.556: 21.6917% ( 267) 00:09:54.936 9949.556 - 10009.135: 24.0573% ( 271) 00:09:54.936 10009.135 - 10068.713: 26.5887% ( 290) 00:09:54.936 10068.713 - 10128.291: 28.9804% ( 274) 00:09:54.936 10128.291 - 10187.869: 31.3111% ( 267) 00:09:54.936 10187.869 - 10247.447: 33.8076% ( 286) 00:09:54.936 10247.447 - 10307.025: 36.3565% ( 292) 00:09:54.936 10307.025 - 10366.604: 38.8181% ( 282) 00:09:54.936 10366.604 - 10426.182: 41.3233% ( 287) 00:09:54.936 10426.182 - 10485.760: 43.9682% ( 303) 00:09:54.936 10485.760 - 10545.338: 46.5171% ( 292) 00:09:54.936 10545.338 - 10604.916: 48.9962% ( 284) 00:09:54.936 10604.916 - 10664.495: 51.1959% ( 252) 00:09:54.936 10664.495 - 10724.073: 53.2821% ( 239) 00:09:54.936 10724.073 - 10783.651: 55.3946% ( 242) 00:09:54.936 10783.651 - 10843.229: 57.3760% ( 227) 00:09:54.936 10843.229 - 10902.807: 59.2877% ( 219) 00:09:54.936 10902.807 - 10962.385: 61.1645% ( 215) 00:09:54.936 10962.385 - 11021.964: 62.7619% ( 183) 00:09:54.936 11021.964 - 11081.542: 64.2458% ( 170) 00:09:54.936 11081.542 - 11141.120: 65.6075% ( 156) 00:09:54.936 11141.120 - 11200.698: 66.8558% ( 143) 00:09:54.936 11200.698 - 11260.276: 68.0168% ( 133) 00:09:54.936 11260.276 - 11319.855: 69.1341% ( 128) 00:09:54.936 11319.855 - 11379.433: 70.1816% ( 120) 00:09:54.936 11379.433 - 11439.011: 71.2552% ( 123) 00:09:54.936 11439.011 - 11498.589: 72.1718% ( 105) 00:09:54.936 11498.589 - 11558.167: 73.1494% ( 112) 00:09:54.936 11558.167 - 11617.745: 74.0485% ( 103) 00:09:54.936 11617.745 - 11677.324: 74.8953% ( 97) 00:09:54.936 11677.324 - 11736.902: 75.6372% ( 85) 00:09:54.936 11736.902 - 11796.480: 76.3355% ( 80) 00:09:54.936 11796.480 - 11856.058: 76.9815% ( 74) 00:09:54.936 11856.058 - 11915.636: 77.6536% ( 77) 00:09:54.936 11915.636 - 11975.215: 78.2996% ( 74) 00:09:54.936 11975.215 - 12034.793: 78.9019% ( 69) 00:09:54.936 12034.793 - 12094.371: 79.5216% ( 71) 00:09:54.936 12094.371 - 12153.949: 80.1501% ( 72) 00:09:54.936 12153.949 - 12213.527: 80.8135% ( 76) 00:09:54.936 12213.527 - 12273.105: 81.4944% ( 78) 00:09:54.936 12273.105 - 12332.684: 82.1054% ( 70) 00:09:54.936 12332.684 - 12392.262: 82.6903% ( 67) 00:09:54.936 12392.262 - 12451.840: 83.1878% ( 57) 00:09:54.936 12451.840 - 12511.418: 83.6941% ( 58) 00:09:54.936 12511.418 - 12570.996: 84.1655% ( 54) 00:09:54.936 12570.996 - 12630.575: 84.6805% ( 59) 00:09:54.936 12630.575 - 12690.153: 85.1868% ( 58) 00:09:54.936 12690.153 - 12749.731: 85.7018% ( 59) 00:09:54.936 12749.731 - 12809.309: 86.2517% ( 63) 00:09:54.936 12809.309 - 12868.887: 86.8104% ( 64) 00:09:54.936 12868.887 - 12928.465: 87.4476% ( 73) 00:09:54.936 12928.465 - 12988.044: 88.0761% ( 72) 00:09:54.936 12988.044 - 13047.622: 88.6959% ( 71) 00:09:54.936 13047.622 - 13107.200: 89.3418% ( 74) 00:09:54.936 13107.200 - 13166.778: 89.9179% ( 66) 00:09:54.936 13166.778 - 13226.356: 90.3980% ( 55) 00:09:54.936 13226.356 - 13285.935: 90.9480% ( 63) 00:09:54.936 13285.935 - 13345.513: 91.4543% ( 58) 00:09:54.936 13345.513 - 13405.091: 91.9518% ( 57) 00:09:54.936 13405.091 - 13464.669: 92.4581% ( 58) 00:09:54.936 13464.669 - 13524.247: 92.9382% ( 55) 00:09:54.936 13524.247 - 13583.825: 93.3310% ( 45) 
00:09:54.936 13583.825 - 13643.404: 93.7151% ( 44) 00:09:54.937 13643.404 - 13702.982: 94.0642% ( 40) 00:09:54.937 13702.982 - 13762.560: 94.4309% ( 42) 00:09:54.937 13762.560 - 13822.138: 94.7189% ( 33) 00:09:54.937 13822.138 - 13881.716: 94.9459% ( 26) 00:09:54.937 13881.716 - 13941.295: 95.1990% ( 29) 00:09:54.937 13941.295 - 14000.873: 95.4434% ( 28) 00:09:54.937 14000.873 - 14060.451: 95.6355% ( 22) 00:09:54.937 14060.451 - 14120.029: 95.8188% ( 21) 00:09:54.937 14120.029 - 14179.607: 95.9846% ( 19) 00:09:54.937 14179.607 - 14239.185: 96.1592% ( 20) 00:09:54.937 14239.185 - 14298.764: 96.3076% ( 17) 00:09:54.937 14298.764 - 14358.342: 96.4298% ( 14) 00:09:54.937 14358.342 - 14417.920: 96.5782% ( 17) 00:09:54.937 14417.920 - 14477.498: 96.7091% ( 15) 00:09:54.937 14477.498 - 14537.076: 96.8052% ( 11) 00:09:54.937 14537.076 - 14596.655: 96.9012% ( 11) 00:09:54.937 14596.655 - 14656.233: 96.9885% ( 10) 00:09:54.937 14656.233 - 14715.811: 97.0409% ( 6) 00:09:54.937 14715.811 - 14775.389: 97.1194% ( 9) 00:09:54.937 14775.389 - 14834.967: 97.1805% ( 7) 00:09:54.937 14834.967 - 14894.545: 97.2416% ( 7) 00:09:54.937 14894.545 - 14954.124: 97.3115% ( 8) 00:09:54.937 14954.124 - 15013.702: 97.3813% ( 8) 00:09:54.937 15013.702 - 15073.280: 97.4337% ( 6) 00:09:54.937 15073.280 - 15132.858: 97.4773% ( 5) 00:09:54.937 15132.858 - 15192.436: 97.5209% ( 5) 00:09:54.937 15192.436 - 15252.015: 97.5995% ( 9) 00:09:54.937 15252.015 - 15371.171: 97.7304% ( 15) 00:09:54.937 15371.171 - 15490.327: 97.9050% ( 20) 00:09:54.937 15490.327 - 15609.484: 98.0622% ( 18) 00:09:54.937 15609.484 - 15728.640: 98.2280% ( 19) 00:09:54.937 15728.640 - 15847.796: 98.3415% ( 13) 00:09:54.937 15847.796 - 15966.953: 98.4288% ( 10) 00:09:54.937 15966.953 - 16086.109: 98.5073% ( 9) 00:09:54.937 16086.109 - 16205.265: 98.5946% ( 10) 00:09:54.937 16205.265 - 16324.422: 98.6732% ( 9) 00:09:54.937 16324.422 - 16443.578: 98.7605% ( 10) 00:09:54.937 16443.578 - 16562.735: 98.8128% ( 6) 00:09:54.937 16562.735 - 16681.891: 98.8478% ( 4) 00:09:54.937 16681.891 - 16801.047: 98.8827% ( 4) 00:09:54.937 24427.055 - 24546.211: 98.9001% ( 2) 00:09:54.937 24546.211 - 24665.367: 98.9351% ( 4) 00:09:54.937 24665.367 - 24784.524: 98.9700% ( 4) 00:09:54.937 24784.524 - 24903.680: 99.0136% ( 5) 00:09:54.937 24903.680 - 25022.836: 99.0485% ( 4) 00:09:54.937 25022.836 - 25141.993: 99.0834% ( 4) 00:09:54.937 25141.993 - 25261.149: 99.1184% ( 4) 00:09:54.937 25261.149 - 25380.305: 99.1533% ( 4) 00:09:54.937 25380.305 - 25499.462: 99.1882% ( 4) 00:09:54.937 25499.462 - 25618.618: 99.2318% ( 5) 00:09:54.937 25618.618 - 25737.775: 99.2668% ( 4) 00:09:54.937 25737.775 - 25856.931: 99.2929% ( 3) 00:09:54.937 25856.931 - 25976.087: 99.3279% ( 4) 00:09:54.937 25976.087 - 26095.244: 99.3628% ( 4) 00:09:54.937 26095.244 - 26214.400: 99.3977% ( 4) 00:09:54.937 26214.400 - 26333.556: 99.4326% ( 4) 00:09:54.937 26333.556 - 26452.713: 99.4413% ( 1) 00:09:54.937 31695.593 - 31933.905: 99.4501% ( 1) 00:09:54.937 31933.905 - 32172.218: 99.5112% ( 7) 00:09:54.937 32172.218 - 32410.531: 99.5810% ( 8) 00:09:54.937 32410.531 - 32648.844: 99.6421% ( 7) 00:09:54.937 32648.844 - 32887.156: 99.7207% ( 9) 00:09:54.937 32887.156 - 33125.469: 99.7992% ( 9) 00:09:54.937 33125.469 - 33363.782: 99.8691% ( 8) 00:09:54.937 33363.782 - 33602.095: 99.9476% ( 9) 00:09:54.937 33602.095 - 33840.407: 100.0000% ( 6) 00:09:54.937 00:09:54.937 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:54.937 
============================================================================== 00:09:54.937 Range in us Cumulative IO count 00:09:54.937 6613.178 - 6642.967: 0.0087% ( 1) 00:09:54.937 6642.967 - 6672.756: 0.0175% ( 1) 00:09:54.937 6702.545 - 6732.335: 0.0262% ( 1) 00:09:54.937 6732.335 - 6762.124: 0.0349% ( 1) 00:09:54.937 6762.124 - 6791.913: 0.0611% ( 3) 00:09:54.937 6791.913 - 6821.702: 0.0786% ( 2) 00:09:54.937 6821.702 - 6851.491: 0.0960% ( 2) 00:09:54.937 6851.491 - 6881.280: 0.1135% ( 2) 00:09:54.937 6881.280 - 6911.069: 0.1309% ( 2) 00:09:54.937 6911.069 - 6940.858: 0.1484% ( 2) 00:09:54.937 6940.858 - 6970.647: 0.1659% ( 2) 00:09:54.937 6970.647 - 7000.436: 0.1833% ( 2) 00:09:54.937 7000.436 - 7030.225: 0.1920% ( 1) 00:09:54.937 7030.225 - 7060.015: 0.2095% ( 2) 00:09:54.937 7060.015 - 7089.804: 0.2270% ( 2) 00:09:54.937 7089.804 - 7119.593: 0.2444% ( 2) 00:09:54.937 7119.593 - 7149.382: 0.2531% ( 1) 00:09:54.937 7149.382 - 7179.171: 0.2706% ( 2) 00:09:54.937 7179.171 - 7208.960: 0.2881% ( 2) 00:09:54.937 7208.960 - 7238.749: 0.2968% ( 1) 00:09:54.937 7238.749 - 7268.538: 0.3142% ( 2) 00:09:54.937 7268.538 - 7298.327: 0.3317% ( 2) 00:09:54.937 7298.327 - 7328.116: 0.3492% ( 2) 00:09:54.937 7328.116 - 7357.905: 0.3666% ( 2) 00:09:54.937 7357.905 - 7387.695: 0.3753% ( 1) 00:09:54.937 7387.695 - 7417.484: 0.3928% ( 2) 00:09:54.937 7417.484 - 7447.273: 0.4015% ( 1) 00:09:54.937 7447.273 - 7477.062: 0.4190% ( 2) 00:09:54.937 7477.062 - 7506.851: 0.4365% ( 2) 00:09:54.937 7506.851 - 7536.640: 0.4539% ( 2) 00:09:54.937 7536.640 - 7566.429: 0.4626% ( 1) 00:09:54.937 7566.429 - 7596.218: 0.4801% ( 2) 00:09:54.937 7596.218 - 7626.007: 0.4888% ( 1) 00:09:54.937 7626.007 - 7685.585: 0.5237% ( 4) 00:09:54.937 7685.585 - 7745.164: 0.5499% ( 3) 00:09:54.937 7745.164 - 7804.742: 0.5587% ( 1) 00:09:54.937 8400.524 - 8460.102: 0.5674% ( 1) 00:09:54.937 8460.102 - 8519.680: 0.6634% ( 11) 00:09:54.937 8519.680 - 8579.258: 0.7943% ( 15) 00:09:54.937 8579.258 - 8638.836: 0.9777% ( 21) 00:09:54.937 8638.836 - 8698.415: 1.1697% ( 22) 00:09:54.937 8698.415 - 8757.993: 1.3792% ( 24) 00:09:54.937 8757.993 - 8817.571: 1.6672% ( 33) 00:09:54.937 8817.571 - 8877.149: 1.9902% ( 37) 00:09:54.937 8877.149 - 8936.727: 2.3219% ( 38) 00:09:54.937 8936.727 - 8996.305: 2.7671% ( 51) 00:09:54.937 8996.305 - 9055.884: 3.2996% ( 61) 00:09:54.937 9055.884 - 9115.462: 3.7971% ( 57) 00:09:54.937 9115.462 - 9175.040: 4.3471% ( 63) 00:09:54.937 9175.040 - 9234.618: 4.9406% ( 68) 00:09:54.937 9234.618 - 9294.196: 5.5866% ( 74) 00:09:54.937 9294.196 - 9353.775: 6.3024% ( 82) 00:09:54.937 9353.775 - 9413.353: 7.0269% ( 83) 00:09:54.937 9413.353 - 9472.931: 7.9260% ( 103) 00:09:54.937 9472.931 - 9532.509: 9.1568% ( 141) 00:09:54.937 9532.509 - 9592.087: 10.5010% ( 154) 00:09:54.937 9592.087 - 9651.665: 12.0635% ( 179) 00:09:54.937 9651.665 - 9711.244: 13.7919% ( 198) 00:09:54.937 9711.244 - 9770.822: 15.5290% ( 199) 00:09:54.937 9770.822 - 9830.400: 17.4319% ( 218) 00:09:54.937 9830.400 - 9889.978: 19.5967% ( 248) 00:09:54.937 9889.978 - 9949.556: 21.8925% ( 263) 00:09:54.937 9949.556 - 10009.135: 24.1096% ( 254) 00:09:54.937 10009.135 - 10068.713: 26.5800% ( 283) 00:09:54.937 10068.713 - 10128.291: 28.9106% ( 267) 00:09:54.937 10128.291 - 10187.869: 31.2064% ( 263) 00:09:54.937 10187.869 - 10247.447: 33.6418% ( 279) 00:09:54.937 10247.447 - 10307.025: 36.1645% ( 289) 00:09:54.937 10307.025 - 10366.604: 38.7395% ( 295) 00:09:54.937 10366.604 - 10426.182: 41.3321% ( 297) 00:09:54.937 10426.182 - 10485.760: 43.8984% ( 294) 
00:09:54.937 10485.760 - 10545.338: 46.4211% ( 289) 00:09:54.937 10545.338 - 10604.916: 48.9612% ( 291) 00:09:54.937 10604.916 - 10664.495: 51.3617% ( 275) 00:09:54.937 10664.495 - 10724.073: 53.4654% ( 241) 00:09:54.937 10724.073 - 10783.651: 55.4469% ( 227) 00:09:54.937 10783.651 - 10843.229: 57.4459% ( 229) 00:09:54.937 10843.229 - 10902.807: 59.3314% ( 216) 00:09:54.937 10902.807 - 10962.385: 61.1906% ( 213) 00:09:54.937 10962.385 - 11021.964: 62.8841% ( 194) 00:09:54.937 11021.964 - 11081.542: 64.3593% ( 169) 00:09:54.937 11081.542 - 11141.120: 65.7385% ( 158) 00:09:54.937 11141.120 - 11200.698: 67.0216% ( 147) 00:09:54.937 11200.698 - 11260.276: 68.2001% ( 135) 00:09:54.937 11260.276 - 11319.855: 69.3872% ( 136) 00:09:54.937 11319.855 - 11379.433: 70.4696% ( 124) 00:09:54.937 11379.433 - 11439.011: 71.4735% ( 115) 00:09:54.937 11439.011 - 11498.589: 72.4337% ( 110) 00:09:54.937 11498.589 - 11558.167: 73.2629% ( 95) 00:09:54.937 11558.167 - 11617.745: 74.0223% ( 87) 00:09:54.937 11617.745 - 11677.324: 74.7119% ( 79) 00:09:54.937 11677.324 - 11736.902: 75.4015% ( 79) 00:09:54.937 11736.902 - 11796.480: 76.1086% ( 81) 00:09:54.937 11796.480 - 11856.058: 76.6847% ( 66) 00:09:54.937 11856.058 - 11915.636: 77.3394% ( 75) 00:09:54.937 11915.636 - 11975.215: 77.9504% ( 70) 00:09:54.937 11975.215 - 12034.793: 78.5615% ( 70) 00:09:54.937 12034.793 - 12094.371: 79.2423% ( 78) 00:09:54.937 12094.371 - 12153.949: 79.8970% ( 75) 00:09:54.937 12153.949 - 12213.527: 80.5255% ( 72) 00:09:54.937 12213.527 - 12273.105: 81.1278% ( 69) 00:09:54.937 12273.105 - 12332.684: 81.7650% ( 73) 00:09:54.937 12332.684 - 12392.262: 82.3411% ( 66) 00:09:54.937 12392.262 - 12451.840: 82.9347% ( 68) 00:09:54.937 12451.840 - 12511.418: 83.4410% ( 58) 00:09:54.937 12511.418 - 12570.996: 84.0258% ( 67) 00:09:54.937 12570.996 - 12630.575: 84.6107% ( 67) 00:09:54.937 12630.575 - 12690.153: 85.2130% ( 69) 00:09:54.937 12690.153 - 12749.731: 85.7455% ( 61) 00:09:54.937 12749.731 - 12809.309: 86.3216% ( 66) 00:09:54.937 12809.309 - 12868.887: 86.9413% ( 71) 00:09:54.937 12868.887 - 12928.465: 87.4913% ( 63) 00:09:54.937 12928.465 - 12988.044: 88.0412% ( 63) 00:09:54.937 12988.044 - 13047.622: 88.5649% ( 60) 00:09:54.937 13047.622 - 13107.200: 89.0974% ( 61) 00:09:54.937 13107.200 - 13166.778: 89.6910% ( 68) 00:09:54.937 13166.778 - 13226.356: 90.2671% ( 66) 00:09:54.937 13226.356 - 13285.935: 90.8170% ( 63) 00:09:54.937 13285.935 - 13345.513: 91.3495% ( 61) 00:09:54.937 13345.513 - 13405.091: 91.8994% ( 63) 00:09:54.937 13405.091 - 13464.669: 92.4494% ( 63) 00:09:54.937 13464.669 - 13524.247: 92.9906% ( 62) 00:09:54.937 13524.247 - 13583.825: 93.4794% ( 56) 00:09:54.937 13583.825 - 13643.404: 93.8460% ( 42) 00:09:54.937 13643.404 - 13702.982: 94.1865% ( 39) 00:09:54.937 13702.982 - 13762.560: 94.4745% ( 33) 00:09:54.937 13762.560 - 13822.138: 94.7538% ( 32) 00:09:54.938 13822.138 - 13881.716: 95.0157% ( 30) 00:09:54.938 13881.716 - 13941.295: 95.2252% ( 24) 00:09:54.938 13941.295 - 14000.873: 95.4434% ( 25) 00:09:54.938 14000.873 - 14060.451: 95.6529% ( 24) 00:09:54.938 14060.451 - 14120.029: 95.8362% ( 21) 00:09:54.938 14120.029 - 14179.607: 96.0632% ( 26) 00:09:54.938 14179.607 - 14239.185: 96.2465% ( 21) 00:09:54.938 14239.185 - 14298.764: 96.4298% ( 21) 00:09:54.938 14298.764 - 14358.342: 96.6131% ( 21) 00:09:54.938 14358.342 - 14417.920: 96.7703% ( 18) 00:09:54.938 14417.920 - 14477.498: 96.9012% ( 15) 00:09:54.938 14477.498 - 14537.076: 96.9972% ( 11) 00:09:54.938 14537.076 - 14596.655: 97.1281% ( 15) 00:09:54.938 
14596.655 - 14656.233: 97.2329% ( 12) 00:09:54.938 14656.233 - 14715.811: 97.3376% ( 12) 00:09:54.938 14715.811 - 14775.389: 97.4424% ( 12) 00:09:54.938 14775.389 - 14834.967: 97.5384% ( 11) 00:09:54.938 14834.967 - 14894.545: 97.6432% ( 12) 00:09:54.938 14894.545 - 14954.124: 97.7304% ( 10) 00:09:54.938 14954.124 - 15013.702: 97.8090% ( 9) 00:09:54.938 15013.702 - 15073.280: 97.8614% ( 6) 00:09:54.938 15073.280 - 15132.858: 97.9050% ( 5) 00:09:54.938 15132.858 - 15192.436: 97.9399% ( 4) 00:09:54.938 15192.436 - 15252.015: 97.9836% ( 5) 00:09:54.938 15252.015 - 15371.171: 98.0709% ( 10) 00:09:54.938 15371.171 - 15490.327: 98.1407% ( 8) 00:09:54.938 15490.327 - 15609.484: 98.2804% ( 16) 00:09:54.938 15609.484 - 15728.640: 98.3677% ( 10) 00:09:54.938 15728.640 - 15847.796: 98.4637% ( 11) 00:09:54.938 15847.796 - 15966.953: 98.5335% ( 8) 00:09:54.938 15966.953 - 16086.109: 98.5772% ( 5) 00:09:54.938 16086.109 - 16205.265: 98.6295% ( 6) 00:09:54.938 16205.265 - 16324.422: 98.6819% ( 6) 00:09:54.938 16324.422 - 16443.578: 98.7256% ( 5) 00:09:54.938 16443.578 - 16562.735: 98.7779% ( 6) 00:09:54.938 16562.735 - 16681.891: 98.8216% ( 5) 00:09:54.938 16681.891 - 16801.047: 98.8652% ( 5) 00:09:54.938 16801.047 - 16920.204: 98.8827% ( 2) 00:09:54.938 23473.804 - 23592.960: 98.8914% ( 1) 00:09:54.938 23592.960 - 23712.116: 98.9263% ( 4) 00:09:54.938 23712.116 - 23831.273: 98.9612% ( 4) 00:09:54.938 23831.273 - 23950.429: 98.9962% ( 4) 00:09:54.938 23950.429 - 24069.585: 99.0311% ( 4) 00:09:54.938 24069.585 - 24188.742: 99.0660% ( 4) 00:09:54.938 24188.742 - 24307.898: 99.1009% ( 4) 00:09:54.938 24307.898 - 24427.055: 99.1358% ( 4) 00:09:54.938 24427.055 - 24546.211: 99.1707% ( 4) 00:09:54.938 24546.211 - 24665.367: 99.2057% ( 4) 00:09:54.938 24665.367 - 24784.524: 99.2406% ( 4) 00:09:54.938 24784.524 - 24903.680: 99.2842% ( 5) 00:09:54.938 24903.680 - 25022.836: 99.3191% ( 4) 00:09:54.938 25022.836 - 25141.993: 99.3453% ( 3) 00:09:54.938 25141.993 - 25261.149: 99.3802% ( 4) 00:09:54.938 25261.149 - 25380.305: 99.4152% ( 4) 00:09:54.938 25380.305 - 25499.462: 99.4413% ( 3) 00:09:54.938 30980.655 - 31218.967: 99.4937% ( 6) 00:09:54.938 31218.967 - 31457.280: 99.5723% ( 9) 00:09:54.938 31457.280 - 31695.593: 99.6421% ( 8) 00:09:54.938 31695.593 - 31933.905: 99.7119% ( 8) 00:09:54.938 31933.905 - 32172.218: 99.7905% ( 9) 00:09:54.938 32172.218 - 32410.531: 99.8603% ( 8) 00:09:54.938 32410.531 - 32648.844: 99.9302% ( 8) 00:09:54.938 32648.844 - 32887.156: 100.0000% ( 8) 00:09:54.938 00:09:54.938 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:54.938 ============================================================================== 00:09:54.938 Range in us Cumulative IO count 00:09:54.938 6255.709 - 6285.498: 0.0087% ( 1) 00:09:54.938 6285.498 - 6315.287: 0.0262% ( 2) 00:09:54.938 6315.287 - 6345.076: 0.0436% ( 2) 00:09:54.938 6345.076 - 6374.865: 0.0611% ( 2) 00:09:54.938 6374.865 - 6404.655: 0.0698% ( 1) 00:09:54.938 6404.655 - 6434.444: 0.0873% ( 2) 00:09:54.938 6434.444 - 6464.233: 0.1135% ( 3) 00:09:54.938 6464.233 - 6494.022: 0.1309% ( 2) 00:09:54.938 6494.022 - 6523.811: 0.1484% ( 2) 00:09:54.938 6523.811 - 6553.600: 0.1659% ( 2) 00:09:54.938 6553.600 - 6583.389: 0.1833% ( 2) 00:09:54.938 6583.389 - 6613.178: 0.1920% ( 1) 00:09:54.938 6613.178 - 6642.967: 0.2095% ( 2) 00:09:54.938 6642.967 - 6672.756: 0.2182% ( 1) 00:09:54.938 6672.756 - 6702.545: 0.2357% ( 2) 00:09:54.938 6702.545 - 6732.335: 0.2444% ( 1) 00:09:54.938 6732.335 - 6762.124: 0.2619% ( 2) 00:09:54.938 6762.124 - 
6791.913: 0.2793% ( 2) 00:09:54.938 6791.913 - 6821.702: 0.2968% ( 2) 00:09:54.938 6821.702 - 6851.491: 0.3055% ( 1) 00:09:54.938 6851.491 - 6881.280: 0.3230% ( 2) 00:09:54.938 6881.280 - 6911.069: 0.3317% ( 1) 00:09:54.938 6911.069 - 6940.858: 0.3492% ( 2) 00:09:54.938 6940.858 - 6970.647: 0.3666% ( 2) 00:09:54.938 6970.647 - 7000.436: 0.3841% ( 2) 00:09:54.938 7000.436 - 7030.225: 0.3928% ( 1) 00:09:54.938 7030.225 - 7060.015: 0.4103% ( 2) 00:09:54.938 7060.015 - 7089.804: 0.4277% ( 2) 00:09:54.938 7089.804 - 7119.593: 0.4452% ( 2) 00:09:54.938 7119.593 - 7149.382: 0.4539% ( 1) 00:09:54.938 7149.382 - 7179.171: 0.4714% ( 2) 00:09:54.938 7179.171 - 7208.960: 0.4888% ( 2) 00:09:54.938 7208.960 - 7238.749: 0.5063% ( 2) 00:09:54.938 7238.749 - 7268.538: 0.5237% ( 2) 00:09:54.938 7268.538 - 7298.327: 0.5412% ( 2) 00:09:54.938 7298.327 - 7328.116: 0.5587% ( 2) 00:09:54.938 8460.102 - 8519.680: 0.6634% ( 12) 00:09:54.938 8519.680 - 8579.258: 0.7856% ( 14) 00:09:54.938 8579.258 - 8638.836: 0.9951% ( 24) 00:09:54.938 8638.836 - 8698.415: 1.2483% ( 29) 00:09:54.938 8698.415 - 8757.993: 1.4752% ( 26) 00:09:54.938 8757.993 - 8817.571: 1.7371% ( 30) 00:09:54.938 8817.571 - 8877.149: 2.0601% ( 37) 00:09:54.938 8877.149 - 8936.727: 2.3656% ( 35) 00:09:54.938 8936.727 - 8996.305: 2.7933% ( 49) 00:09:54.938 8996.305 - 9055.884: 3.2909% ( 57) 00:09:54.938 9055.884 - 9115.462: 3.8582% ( 65) 00:09:54.938 9115.462 - 9175.040: 4.4867% ( 72) 00:09:54.938 9175.040 - 9234.618: 5.0803% ( 68) 00:09:54.938 9234.618 - 9294.196: 5.7175% ( 73) 00:09:54.938 9294.196 - 9353.775: 6.4333% ( 82) 00:09:54.938 9353.775 - 9413.353: 7.2800% ( 97) 00:09:54.938 9413.353 - 9472.931: 8.1878% ( 104) 00:09:54.938 9472.931 - 9532.509: 9.4361% ( 143) 00:09:54.938 9532.509 - 9592.087: 10.8240% ( 159) 00:09:54.938 9592.087 - 9651.665: 12.5087% ( 193) 00:09:54.938 9651.665 - 9711.244: 14.1847% ( 192) 00:09:54.938 9711.244 - 9770.822: 15.9567% ( 203) 00:09:54.938 9770.822 - 9830.400: 17.7985% ( 211) 00:09:54.938 9830.400 - 9889.978: 19.8411% ( 234) 00:09:54.938 9889.978 - 9949.556: 22.0321% ( 251) 00:09:54.938 9949.556 - 10009.135: 24.2842% ( 258) 00:09:54.938 10009.135 - 10068.713: 26.6760% ( 274) 00:09:54.938 10068.713 - 10128.291: 29.0503% ( 272) 00:09:54.938 10128.291 - 10187.869: 31.4770% ( 278) 00:09:54.938 10187.869 - 10247.447: 33.9560% ( 284) 00:09:54.938 10247.447 - 10307.025: 36.4263% ( 283) 00:09:54.938 10307.025 - 10366.604: 38.9141% ( 285) 00:09:54.938 10366.604 - 10426.182: 41.4455% ( 290) 00:09:54.938 10426.182 - 10485.760: 44.0293% ( 296) 00:09:54.939 10485.760 - 10545.338: 46.7441% ( 311) 00:09:54.939 10545.338 - 10604.916: 49.1620% ( 277) 00:09:54.939 10604.916 - 10664.495: 51.5189% ( 270) 00:09:54.939 10664.495 - 10724.073: 53.6837% ( 248) 00:09:54.939 10724.073 - 10783.651: 55.8223% ( 245) 00:09:54.939 10783.651 - 10843.229: 57.7950% ( 226) 00:09:54.939 10843.229 - 10902.807: 59.7154% ( 220) 00:09:54.939 10902.807 - 10962.385: 61.4700% ( 201) 00:09:54.939 10962.385 - 11021.964: 63.1634% ( 194) 00:09:54.939 11021.964 - 11081.542: 64.7521% ( 182) 00:09:54.939 11081.542 - 11141.120: 66.1313% ( 158) 00:09:54.939 11141.120 - 11200.698: 67.4232% ( 148) 00:09:54.939 11200.698 - 11260.276: 68.6278% ( 138) 00:09:54.939 11260.276 - 11319.855: 69.7626% ( 130) 00:09:54.939 11319.855 - 11379.433: 70.7053% ( 108) 00:09:54.939 11379.433 - 11439.011: 71.5957% ( 102) 00:09:54.939 11439.011 - 11498.589: 72.5733% ( 112) 00:09:54.939 11498.589 - 11558.167: 73.4200% ( 97) 00:09:54.939 11558.167 - 11617.745: 74.2057% ( 90) 00:09:54.939 
11617.745 - 11677.324: 74.9214% ( 82) 00:09:54.939 11677.324 - 11736.902: 75.5674% ( 74) 00:09:54.939 11736.902 - 11796.480: 76.1435% ( 66) 00:09:54.939 11796.480 - 11856.058: 76.7807% ( 73) 00:09:54.939 11856.058 - 11915.636: 77.4267% ( 74) 00:09:54.939 11915.636 - 11975.215: 78.0464% ( 71) 00:09:54.939 11975.215 - 12034.793: 78.6313% ( 67) 00:09:54.939 12034.793 - 12094.371: 79.2423% ( 70) 00:09:54.939 12094.371 - 12153.949: 79.8010% ( 64) 00:09:54.939 12153.949 - 12213.527: 80.3946% ( 68) 00:09:54.939 12213.527 - 12273.105: 81.0492% ( 75) 00:09:54.939 12273.105 - 12332.684: 81.7126% ( 76) 00:09:54.939 12332.684 - 12392.262: 82.3062% ( 68) 00:09:54.939 12392.262 - 12451.840: 82.9347% ( 72) 00:09:54.939 12451.840 - 12511.418: 83.5283% ( 68) 00:09:54.939 12511.418 - 12570.996: 84.1480% ( 71) 00:09:54.939 12570.996 - 12630.575: 84.7416% ( 68) 00:09:54.939 12630.575 - 12690.153: 85.3614% ( 71) 00:09:54.939 12690.153 - 12749.731: 85.9899% ( 72) 00:09:54.939 12749.731 - 12809.309: 86.6446% ( 75) 00:09:54.939 12809.309 - 12868.887: 87.1770% ( 61) 00:09:54.939 12868.887 - 12928.465: 87.7008% ( 60) 00:09:54.939 12928.465 - 12988.044: 88.2071% ( 58) 00:09:54.939 12988.044 - 13047.622: 88.6697% ( 53) 00:09:54.939 13047.622 - 13107.200: 89.1498% ( 55) 00:09:54.939 13107.200 - 13166.778: 89.6997% ( 63) 00:09:54.939 13166.778 - 13226.356: 90.2147% ( 59) 00:09:54.939 13226.356 - 13285.935: 90.7385% ( 60) 00:09:54.939 13285.935 - 13345.513: 91.2360% ( 57) 00:09:54.939 13345.513 - 13405.091: 91.7423% ( 58) 00:09:54.939 13405.091 - 13464.669: 92.2922% ( 63) 00:09:54.939 13464.669 - 13524.247: 92.7898% ( 57) 00:09:54.939 13524.247 - 13583.825: 93.2350% ( 51) 00:09:54.939 13583.825 - 13643.404: 93.6191% ( 44) 00:09:54.939 13643.404 - 13702.982: 93.9333% ( 36) 00:09:54.939 13702.982 - 13762.560: 94.1952% ( 30) 00:09:54.939 13762.560 - 13822.138: 94.4745% ( 32) 00:09:54.939 13822.138 - 13881.716: 94.7626% ( 33) 00:09:54.939 13881.716 - 13941.295: 95.0157% ( 29) 00:09:54.939 13941.295 - 14000.873: 95.2514% ( 27) 00:09:54.939 14000.873 - 14060.451: 95.4696% ( 25) 00:09:54.939 14060.451 - 14120.029: 95.6704% ( 23) 00:09:54.939 14120.029 - 14179.607: 95.8188% ( 17) 00:09:54.939 14179.607 - 14239.185: 96.0108% ( 22) 00:09:54.939 14239.185 - 14298.764: 96.1679% ( 18) 00:09:54.939 14298.764 - 14358.342: 96.3513% ( 21) 00:09:54.939 14358.342 - 14417.920: 96.4997% ( 17) 00:09:54.939 14417.920 - 14477.498: 96.6655% ( 19) 00:09:54.939 14477.498 - 14537.076: 96.7964% ( 15) 00:09:54.939 14537.076 - 14596.655: 96.9361% ( 16) 00:09:54.939 14596.655 - 14656.233: 97.0670% ( 15) 00:09:54.939 14656.233 - 14715.811: 97.2154% ( 17) 00:09:54.939 14715.811 - 14775.389: 97.3551% ( 16) 00:09:54.939 14775.389 - 14834.967: 97.4860% ( 15) 00:09:54.939 14834.967 - 14894.545: 97.5733% ( 10) 00:09:54.939 14894.545 - 14954.124: 97.6606% ( 10) 00:09:54.939 14954.124 - 15013.702: 97.7392% ( 9) 00:09:54.939 15013.702 - 15073.280: 97.8265% ( 10) 00:09:54.939 15073.280 - 15132.858: 97.8788% ( 6) 00:09:54.939 15132.858 - 15192.436: 97.9399% ( 7) 00:09:54.939 15192.436 - 15252.015: 97.9923% ( 6) 00:09:54.939 15252.015 - 15371.171: 98.0796% ( 10) 00:09:54.939 15371.171 - 15490.327: 98.1233% ( 5) 00:09:54.939 15490.327 - 15609.484: 98.1669% ( 5) 00:09:54.939 15609.484 - 15728.640: 98.2105% ( 5) 00:09:54.939 15728.640 - 15847.796: 98.3240% ( 13) 00:09:54.939 15847.796 - 15966.953: 98.4288% ( 12) 00:09:54.939 15966.953 - 16086.109: 98.4986% ( 8) 00:09:54.939 16086.109 - 16205.265: 98.5510% ( 6) 00:09:54.939 16205.265 - 16324.422: 98.6034% ( 6) 
00:09:54.939 16324.422 - 16443.578: 98.6470% ( 5) 00:09:54.939 16443.578 - 16562.735: 98.6906% ( 5) 00:09:54.939 16562.735 - 16681.891: 98.7343% ( 5) 00:09:54.939 16681.891 - 16801.047: 98.7867% ( 6) 00:09:54.939 16801.047 - 16920.204: 98.8390% ( 6) 00:09:54.939 16920.204 - 17039.360: 98.8827% ( 5) 00:09:54.939 22639.709 - 22758.865: 98.9176% ( 4) 00:09:54.939 22758.865 - 22878.022: 98.9525% ( 4) 00:09:54.939 22878.022 - 22997.178: 98.9874% ( 4) 00:09:54.939 22997.178 - 23116.335: 99.0223% ( 4) 00:09:54.939 23116.335 - 23235.491: 99.0660% ( 5) 00:09:54.939 23235.491 - 23354.647: 99.1009% ( 4) 00:09:54.939 23354.647 - 23473.804: 99.1358% ( 4) 00:09:54.939 23473.804 - 23592.960: 99.1707% ( 4) 00:09:54.939 23592.960 - 23712.116: 99.2057% ( 4) 00:09:54.939 23712.116 - 23831.273: 99.2406% ( 4) 00:09:54.939 23831.273 - 23950.429: 99.2755% ( 4) 00:09:54.939 23950.429 - 24069.585: 99.3191% ( 5) 00:09:54.939 24069.585 - 24188.742: 99.3453% ( 3) 00:09:54.939 24188.742 - 24307.898: 99.3890% ( 5) 00:09:54.939 24307.898 - 24427.055: 99.4239% ( 4) 00:09:54.939 24427.055 - 24546.211: 99.4413% ( 2) 00:09:54.939 30027.404 - 30146.560: 99.4675% ( 3) 00:09:54.939 30146.560 - 30265.716: 99.5024% ( 4) 00:09:54.939 30265.716 - 30384.873: 99.5461% ( 5) 00:09:54.939 30384.873 - 30504.029: 99.5723% ( 3) 00:09:54.939 30504.029 - 30742.342: 99.6334% ( 7) 00:09:54.939 30742.342 - 30980.655: 99.7032% ( 8) 00:09:54.939 30980.655 - 31218.967: 99.7818% ( 9) 00:09:54.939 31218.967 - 31457.280: 99.8516% ( 8) 00:09:54.939 31457.280 - 31695.593: 99.9476% ( 11) 00:09:54.939 31695.593 - 31933.905: 100.0000% ( 6) 00:09:54.939 00:09:54.939 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:54.939 ============================================================================== 00:09:54.939 Range in us Cumulative IO count 00:09:54.939 5868.451 - 5898.240: 0.0175% ( 2) 00:09:54.939 5898.240 - 5928.029: 0.0349% ( 2) 00:09:54.939 5928.029 - 5957.818: 0.0524% ( 2) 00:09:54.939 5957.818 - 5987.607: 0.0698% ( 2) 00:09:54.939 5987.607 - 6017.396: 0.0873% ( 2) 00:09:54.939 6017.396 - 6047.185: 0.1047% ( 2) 00:09:54.939 6047.185 - 6076.975: 0.1135% ( 1) 00:09:54.939 6076.975 - 6106.764: 0.1309% ( 2) 00:09:54.939 6106.764 - 6136.553: 0.1484% ( 2) 00:09:54.939 6136.553 - 6166.342: 0.1659% ( 2) 00:09:54.939 6166.342 - 6196.131: 0.1833% ( 2) 00:09:54.939 6196.131 - 6225.920: 0.2008% ( 2) 00:09:54.939 6225.920 - 6255.709: 0.2182% ( 2) 00:09:54.939 6255.709 - 6285.498: 0.2357% ( 2) 00:09:54.939 6285.498 - 6315.287: 0.2531% ( 2) 00:09:54.939 6315.287 - 6345.076: 0.2706% ( 2) 00:09:54.939 6345.076 - 6374.865: 0.2793% ( 1) 00:09:54.939 6374.865 - 6404.655: 0.2968% ( 2) 00:09:54.939 6404.655 - 6434.444: 0.3142% ( 2) 00:09:54.939 6434.444 - 6464.233: 0.3317% ( 2) 00:09:54.939 6464.233 - 6494.022: 0.3492% ( 2) 00:09:54.939 6494.022 - 6523.811: 0.3666% ( 2) 00:09:54.939 6523.811 - 6553.600: 0.3753% ( 1) 00:09:54.939 6553.600 - 6583.389: 0.3928% ( 2) 00:09:54.939 6583.389 - 6613.178: 0.4103% ( 2) 00:09:54.939 6613.178 - 6642.967: 0.4277% ( 2) 00:09:54.939 6642.967 - 6672.756: 0.4452% ( 2) 00:09:54.939 6672.756 - 6702.545: 0.4626% ( 2) 00:09:54.939 6702.545 - 6732.335: 0.4714% ( 1) 00:09:54.939 6732.335 - 6762.124: 0.4888% ( 2) 00:09:54.939 6762.124 - 6791.913: 0.5063% ( 2) 00:09:54.939 6791.913 - 6821.702: 0.5237% ( 2) 00:09:54.939 6821.702 - 6851.491: 0.5325% ( 1) 00:09:54.939 6851.491 - 6881.280: 0.5499% ( 2) 00:09:54.939 6881.280 - 6911.069: 0.5587% ( 1) 00:09:54.939 8400.524 - 8460.102: 0.5674% ( 1) 00:09:54.939 8460.102 - 8519.680: 
0.6896% ( 14) 00:09:54.939 8519.680 - 8579.258: 0.8293% ( 16) 00:09:54.939 8579.258 - 8638.836: 1.0737% ( 28) 00:09:54.939 8638.836 - 8698.415: 1.3966% ( 37) 00:09:54.939 8698.415 - 8757.993: 1.6498% ( 29) 00:09:54.939 8757.993 - 8817.571: 1.9378% ( 33) 00:09:54.939 8817.571 - 8877.149: 2.2434% ( 35) 00:09:54.939 8877.149 - 8936.727: 2.6449% ( 46) 00:09:54.939 8936.727 - 8996.305: 3.0552% ( 47) 00:09:54.939 8996.305 - 9055.884: 3.6313% ( 66) 00:09:54.939 9055.884 - 9115.462: 4.1638% ( 61) 00:09:54.939 9115.462 - 9175.040: 4.6962% ( 61) 00:09:54.939 9175.040 - 9234.618: 5.3509% ( 75) 00:09:54.939 9234.618 - 9294.196: 5.9532% ( 69) 00:09:54.939 9294.196 - 9353.775: 6.7126% ( 87) 00:09:54.939 9353.775 - 9413.353: 7.6030% ( 102) 00:09:54.939 9413.353 - 9472.931: 8.5807% ( 112) 00:09:54.939 9472.931 - 9532.509: 9.6718% ( 125) 00:09:54.939 9532.509 - 9592.087: 11.0946% ( 163) 00:09:54.939 9592.087 - 9651.665: 12.6135% ( 174) 00:09:54.939 9651.665 - 9711.244: 14.1498% ( 176) 00:09:54.939 9711.244 - 9770.822: 15.9131% ( 202) 00:09:54.939 9770.822 - 9830.400: 17.7462% ( 210) 00:09:54.939 9830.400 - 9889.978: 19.7102% ( 225) 00:09:54.939 9889.978 - 9949.556: 22.0147% ( 264) 00:09:54.939 9949.556 - 10009.135: 24.3017% ( 262) 00:09:54.939 10009.135 - 10068.713: 26.7022% ( 275) 00:09:54.939 10068.713 - 10128.291: 29.1288% ( 278) 00:09:54.939 10128.291 - 10187.869: 31.5119% ( 273) 00:09:54.939 10187.869 - 10247.447: 34.0084% ( 286) 00:09:54.939 10247.447 - 10307.025: 36.3303% ( 266) 00:09:54.939 10307.025 - 10366.604: 38.8268% ( 286) 00:09:54.939 10366.604 - 10426.182: 41.3146% ( 285) 00:09:54.939 10426.182 - 10485.760: 43.7675% ( 281) 00:09:54.939 10485.760 - 10545.338: 46.2902% ( 289) 00:09:54.939 10545.338 - 10604.916: 48.7517% ( 282) 00:09:54.939 10604.916 - 10664.495: 51.0911% ( 268) 00:09:54.940 10664.495 - 10724.073: 53.3781% ( 262) 00:09:54.940 10724.073 - 10783.651: 55.5080% ( 244) 00:09:54.940 10783.651 - 10843.229: 57.5419% ( 233) 00:09:54.940 10843.229 - 10902.807: 59.5147% ( 226) 00:09:54.940 10902.807 - 10962.385: 61.3478% ( 210) 00:09:54.940 10962.385 - 11021.964: 63.0936% ( 200) 00:09:54.940 11021.964 - 11081.542: 64.6037% ( 173) 00:09:54.940 11081.542 - 11141.120: 65.9916% ( 159) 00:09:54.940 11141.120 - 11200.698: 67.3010% ( 150) 00:09:54.940 11200.698 - 11260.276: 68.4532% ( 132) 00:09:54.940 11260.276 - 11319.855: 69.5007% ( 120) 00:09:54.940 11319.855 - 11379.433: 70.5133% ( 116) 00:09:54.940 11379.433 - 11439.011: 71.3687% ( 98) 00:09:54.940 11439.011 - 11498.589: 72.2765% ( 104) 00:09:54.940 11498.589 - 11558.167: 73.0796% ( 92) 00:09:54.940 11558.167 - 11617.745: 73.9438% ( 99) 00:09:54.940 11617.745 - 11677.324: 74.7643% ( 94) 00:09:54.940 11677.324 - 11736.902: 75.5237% ( 87) 00:09:54.940 11736.902 - 11796.480: 76.1173% ( 68) 00:09:54.940 11796.480 - 11856.058: 76.8156% ( 80) 00:09:54.940 11856.058 - 11915.636: 77.4354% ( 71) 00:09:54.940 11915.636 - 11975.215: 78.0639% ( 72) 00:09:54.940 11975.215 - 12034.793: 78.6837% ( 71) 00:09:54.940 12034.793 - 12094.371: 79.3471% ( 76) 00:09:54.940 12094.371 - 12153.949: 79.9581% ( 70) 00:09:54.940 12153.949 - 12213.527: 80.5691% ( 70) 00:09:54.940 12213.527 - 12273.105: 81.1714% ( 69) 00:09:54.940 12273.105 - 12332.684: 81.8348% ( 76) 00:09:54.940 12332.684 - 12392.262: 82.4546% ( 71) 00:09:54.940 12392.262 - 12451.840: 83.0831% ( 72) 00:09:54.940 12451.840 - 12511.418: 83.7029% ( 71) 00:09:54.940 12511.418 - 12570.996: 84.3226% ( 71) 00:09:54.940 12570.996 - 12630.575: 84.9686% ( 74) 00:09:54.940 12630.575 - 12690.153: 85.6145% ( 74) 
00:09:54.940 12690.153 - 12749.731: 86.2954% ( 78) 00:09:54.940 12749.731 - 12809.309: 86.9501% ( 75) 00:09:54.940 12809.309 - 12868.887: 87.6309% ( 78) 00:09:54.940 12868.887 - 12928.465: 88.2507% ( 71) 00:09:54.940 12928.465 - 12988.044: 88.8792% ( 72) 00:09:54.940 12988.044 - 13047.622: 89.4204% ( 62) 00:09:54.940 13047.622 - 13107.200: 89.9878% ( 65) 00:09:54.940 13107.200 - 13166.778: 90.5115% ( 60) 00:09:54.940 13166.778 - 13226.356: 90.9654% ( 52) 00:09:54.940 13226.356 - 13285.935: 91.4106% ( 51) 00:09:54.940 13285.935 - 13345.513: 91.8645% ( 52) 00:09:54.940 13345.513 - 13405.091: 92.3097% ( 51) 00:09:54.940 13405.091 - 13464.669: 92.7025% ( 45) 00:09:54.940 13464.669 - 13524.247: 93.0604% ( 41) 00:09:54.940 13524.247 - 13583.825: 93.3747% ( 36) 00:09:54.940 13583.825 - 13643.404: 93.6540% ( 32) 00:09:54.940 13643.404 - 13702.982: 93.9159% ( 30) 00:09:54.940 13702.982 - 13762.560: 94.1603% ( 28) 00:09:54.940 13762.560 - 13822.138: 94.4047% ( 28) 00:09:54.940 13822.138 - 13881.716: 94.6142% ( 24) 00:09:54.940 13881.716 - 13941.295: 94.7888% ( 20) 00:09:54.940 13941.295 - 14000.873: 94.9546% ( 19) 00:09:54.940 14000.873 - 14060.451: 95.1466% ( 22) 00:09:54.940 14060.451 - 14120.029: 95.3300% ( 21) 00:09:54.940 14120.029 - 14179.607: 95.5045% ( 20) 00:09:54.940 14179.607 - 14239.185: 95.6704% ( 19) 00:09:54.940 14239.185 - 14298.764: 95.8799% ( 24) 00:09:54.940 14298.764 - 14358.342: 96.0545% ( 20) 00:09:54.940 14358.342 - 14417.920: 96.2465% ( 22) 00:09:54.940 14417.920 - 14477.498: 96.4124% ( 19) 00:09:54.940 14477.498 - 14537.076: 96.5608% ( 17) 00:09:54.940 14537.076 - 14596.655: 96.7441% ( 21) 00:09:54.940 14596.655 - 14656.233: 96.8663% ( 14) 00:09:54.940 14656.233 - 14715.811: 97.0059% ( 16) 00:09:54.940 14715.811 - 14775.389: 97.1369% ( 15) 00:09:54.940 14775.389 - 14834.967: 97.2765% ( 16) 00:09:54.940 14834.967 - 14894.545: 97.3900% ( 13) 00:09:54.940 14894.545 - 14954.124: 97.5035% ( 13) 00:09:54.940 14954.124 - 15013.702: 97.6082% ( 12) 00:09:54.940 15013.702 - 15073.280: 97.7043% ( 11) 00:09:54.940 15073.280 - 15132.858: 97.8090% ( 12) 00:09:54.940 15132.858 - 15192.436: 97.8788% ( 8) 00:09:54.940 15192.436 - 15252.015: 97.9399% ( 7) 00:09:54.940 15252.015 - 15371.171: 98.0534% ( 13) 00:09:54.940 15371.171 - 15490.327: 98.0971% ( 5) 00:09:54.940 15490.327 - 15609.484: 98.1407% ( 5) 00:09:54.940 15609.484 - 15728.640: 98.1931% ( 6) 00:09:54.940 15728.640 - 15847.796: 98.2891% ( 11) 00:09:54.940 15847.796 - 15966.953: 98.3764% ( 10) 00:09:54.940 15966.953 - 16086.109: 98.4550% ( 9) 00:09:54.940 16086.109 - 16205.265: 98.5161% ( 7) 00:09:54.940 16205.265 - 16324.422: 98.5684% ( 6) 00:09:54.940 16324.422 - 16443.578: 98.6208% ( 6) 00:09:54.940 16443.578 - 16562.735: 98.6645% ( 5) 00:09:54.940 16562.735 - 16681.891: 98.7168% ( 6) 00:09:54.940 16681.891 - 16801.047: 98.7692% ( 6) 00:09:54.940 16801.047 - 16920.204: 98.8216% ( 6) 00:09:54.940 16920.204 - 17039.360: 98.8652% ( 5) 00:09:54.940 17039.360 - 17158.516: 98.8827% ( 2) 00:09:54.940 21686.458 - 21805.615: 98.9089% ( 3) 00:09:54.940 21805.615 - 21924.771: 98.9351% ( 3) 00:09:54.940 21924.771 - 22043.927: 98.9787% ( 5) 00:09:54.940 22043.927 - 22163.084: 99.0223% ( 5) 00:09:54.940 22163.084 - 22282.240: 99.0573% ( 4) 00:09:54.940 22282.240 - 22401.396: 99.0922% ( 4) 00:09:54.940 22401.396 - 22520.553: 99.1271% ( 4) 00:09:54.940 22520.553 - 22639.709: 99.1620% ( 4) 00:09:54.940 22639.709 - 22758.865: 99.1969% ( 4) 00:09:54.940 22758.865 - 22878.022: 99.2318% ( 4) 00:09:54.940 22878.022 - 22997.178: 99.2668% ( 4) 
00:09:54.940 22997.178 - 23116.335: 99.3017% ( 4) 00:09:54.940 23116.335 - 23235.491: 99.3453% ( 5) 00:09:54.940 23235.491 - 23354.647: 99.3715% ( 3) 00:09:54.940 23354.647 - 23473.804: 99.4064% ( 4) 00:09:54.940 23473.804 - 23592.960: 99.4413% ( 4) 00:09:54.940 28954.996 - 29074.153: 99.4501% ( 1) 00:09:54.940 29074.153 - 29193.309: 99.4937% ( 5) 00:09:54.940 29193.309 - 29312.465: 99.5374% ( 5) 00:09:54.940 29312.465 - 29431.622: 99.5897% ( 6) 00:09:54.940 29431.622 - 29550.778: 99.6247% ( 4) 00:09:54.940 29550.778 - 29669.935: 99.6770% ( 6) 00:09:54.940 29669.935 - 29789.091: 99.7119% ( 4) 00:09:54.940 29789.091 - 29908.247: 99.7556% ( 5) 00:09:54.940 29908.247 - 30027.404: 99.7992% ( 5) 00:09:54.940 30027.404 - 30146.560: 99.8341% ( 4) 00:09:54.940 30146.560 - 30265.716: 99.8865% ( 6) 00:09:54.940 30265.716 - 30384.873: 99.9302% ( 5) 00:09:54.940 30384.873 - 30504.029: 99.9738% ( 5) 00:09:54.940 30504.029 - 30742.342: 100.0000% ( 3) 00:09:54.940 00:09:54.940 17:16:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:56.330 Initializing NVMe Controllers 00:09:56.330 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:56.330 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:56.330 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:56.330 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:56.330 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:56.330 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:56.330 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:56.330 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:56.330 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:56.330 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:56.330 Initialization complete. Launching workers. 
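The pass that follows was kicked off from nvme/nvme.sh with the spdk_nvme_perf invocation shown at the end of the line above. As a reading aid, the same command is laid out below with a short gloss of each flag; the flag descriptions follow spdk_nvme_perf's usual usage text rather than anything printed in this log, so treat them as a hedged sketch and confirm against the local build's usage output.

    # Hedged sketch: re-issue the same write-latency pass by hand.
    # Flag meanings are taken from spdk_nvme_perf's usual usage text,
    # not from this log; verify against the local build before relying on them.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0
    # -q 128    keep 128 I/Os outstanding per namespace (queue depth)
    # -w write  sequential write workload
    # -o 12288  12288-byte (12 KiB) I/O size
    # -t 1      run for 1 second
    # -LL       latency tracking; passing -L twice is what requests the
    #           detailed per-bin histograms printed in this log
    # -i 0      shared memory group ID 0

The summary tables that follow report per-namespace IOPS, throughput, and latency percentiles, and each "Latency histogram" block lists cumulative I/O counts per latency bin in microseconds.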
00:09:56.330 ======================================================== 00:09:56.330 Latency(us) 00:09:56.330 Device Information : IOPS MiB/s Average min max 00:09:56.330 PCIE (0000:00:10.0) NSID 1 from core 0: 10617.59 124.42 12067.80 8797.17 37291.05 00:09:56.330 PCIE (0000:00:11.0) NSID 1 from core 0: 10617.59 124.42 12049.55 8807.81 35884.97 00:09:56.330 PCIE (0000:00:13.0) NSID 1 from core 0: 10617.59 124.42 12031.87 8241.93 35632.02 00:09:56.330 PCIE (0000:00:12.0) NSID 1 from core 0: 10617.59 124.42 12013.71 8246.92 34554.37 00:09:56.330 PCIE (0000:00:12.0) NSID 2 from core 0: 10617.59 124.42 11993.94 7889.64 33521.42 00:09:56.330 PCIE (0000:00:12.0) NSID 3 from core 0: 10617.59 124.42 11974.22 7491.81 32234.02 00:09:56.330 ======================================================== 00:09:56.330 Total : 63705.52 746.55 12021.85 7491.81 37291.05 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9353.775us 00:09:56.330 10.00000% : 10366.604us 00:09:56.330 25.00000% : 10724.073us 00:09:56.330 50.00000% : 11617.745us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 13941.295us 00:09:56.330 95.00000% : 14477.498us 00:09:56.330 98.00000% : 15192.436us 00:09:56.330 99.00000% : 26333.556us 00:09:56.330 99.50000% : 35508.596us 00:09:56.330 99.90000% : 36938.473us 00:09:56.330 99.99000% : 37415.098us 00:09:56.330 99.99900% : 37415.098us 00:09:56.330 99.99990% : 37415.098us 00:09:56.330 99.99999% : 37415.098us 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9353.775us 00:09:56.330 10.00000% : 10485.760us 00:09:56.330 25.00000% : 10843.229us 00:09:56.330 50.00000% : 11558.167us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 14000.873us 00:09:56.330 95.00000% : 14417.920us 00:09:56.330 98.00000% : 14775.389us 00:09:56.330 99.00000% : 26214.400us 00:09:56.330 99.50000% : 34317.033us 00:09:56.330 99.90000% : 35746.909us 00:09:56.330 99.99000% : 35985.222us 00:09:56.330 99.99900% : 35985.222us 00:09:56.330 99.99990% : 35985.222us 00:09:56.330 99.99999% : 35985.222us 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9234.618us 00:09:56.330 10.00000% : 10426.182us 00:09:56.330 25.00000% : 10783.651us 00:09:56.330 50.00000% : 11558.167us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 14000.873us 00:09:56.330 95.00000% : 14358.342us 00:09:56.330 98.00000% : 14775.389us 00:09:56.330 99.00000% : 26214.400us 00:09:56.330 99.50000% : 34078.720us 00:09:56.330 99.90000% : 35508.596us 00:09:56.330 99.99000% : 35746.909us 00:09:56.330 99.99900% : 35746.909us 00:09:56.330 99.99990% : 35746.909us 00:09:56.330 99.99999% : 35746.909us 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9234.618us 00:09:56.330 10.00000% : 10426.182us 00:09:56.330 25.00000% : 10783.651us 00:09:56.330 50.00000% : 11558.167us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 13941.295us 00:09:56.330 95.00000% : 14358.342us 00:09:56.330 98.00000% : 
14775.389us 00:09:56.330 99.00000% : 25380.305us 00:09:56.330 99.50000% : 32410.531us 00:09:56.330 99.90000% : 34317.033us 00:09:56.330 99.99000% : 34555.345us 00:09:56.330 99.99900% : 34555.345us 00:09:56.330 99.99990% : 34555.345us 00:09:56.330 99.99999% : 34555.345us 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9175.040us 00:09:56.330 10.00000% : 10426.182us 00:09:56.330 25.00000% : 10783.651us 00:09:56.330 50.00000% : 11558.167us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 13881.716us 00:09:56.330 95.00000% : 14298.764us 00:09:56.330 98.00000% : 14775.389us 00:09:56.330 99.00000% : 23950.429us 00:09:56.330 99.50000% : 31933.905us 00:09:56.330 99.90000% : 33363.782us 00:09:56.330 99.99000% : 33602.095us 00:09:56.330 99.99900% : 33602.095us 00:09:56.330 99.99990% : 33602.095us 00:09:56.330 99.99999% : 33602.095us 00:09:56.330 00:09:56.330 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:56.330 ================================================================================= 00:09:56.330 1.00000% : 9234.618us 00:09:56.330 10.00000% : 10426.182us 00:09:56.330 25.00000% : 10783.651us 00:09:56.330 50.00000% : 11558.167us 00:09:56.330 75.00000% : 12809.309us 00:09:56.330 90.00000% : 13881.716us 00:09:56.330 95.00000% : 14298.764us 00:09:56.330 98.00000% : 14715.811us 00:09:56.330 99.00000% : 23116.335us 00:09:56.330 99.50000% : 29908.247us 00:09:56.330 99.90000% : 31933.905us 00:09:56.330 99.99000% : 32410.531us 00:09:56.330 99.99900% : 32410.531us 00:09:56.330 99.99990% : 32410.531us 00:09:56.330 99.99999% : 32410.531us 00:09:56.330 00:09:56.330 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:56.330 ============================================================================== 00:09:56.330 Range in us Cumulative IO count 00:09:56.330 8757.993 - 8817.571: 0.0094% ( 1) 00:09:56.330 8817.571 - 8877.149: 0.0188% ( 1) 00:09:56.330 8877.149 - 8936.727: 0.0659% ( 5) 00:09:56.330 8936.727 - 8996.305: 0.1035% ( 4) 00:09:56.331 8996.305 - 9055.884: 0.1600% ( 6) 00:09:56.331 9055.884 - 9115.462: 0.3389% ( 19) 00:09:56.331 9115.462 - 9175.040: 0.4330% ( 10) 00:09:56.331 9175.040 - 9234.618: 0.5459% ( 12) 00:09:56.331 9234.618 - 9294.196: 0.7812% ( 25) 00:09:56.331 9294.196 - 9353.775: 1.1201% ( 36) 00:09:56.331 9353.775 - 9413.353: 1.3554% ( 25) 00:09:56.331 9413.353 - 9472.931: 1.4590% ( 11) 00:09:56.331 9472.931 - 9532.509: 1.5625% ( 11) 00:09:56.331 9532.509 - 9592.087: 1.6943% ( 14) 00:09:56.331 9592.087 - 9651.665: 1.7790% ( 9) 00:09:56.331 9651.665 - 9711.244: 1.9767% ( 21) 00:09:56.331 9711.244 - 9770.822: 2.1837% ( 22) 00:09:56.331 9770.822 - 9830.400: 2.4661% ( 30) 00:09:56.331 9830.400 - 9889.978: 3.1344% ( 71) 00:09:56.331 9889.978 - 9949.556: 3.5203% ( 41) 00:09:56.331 9949.556 - 10009.135: 3.9251% ( 43) 00:09:56.331 10009.135 - 10068.713: 4.4804% ( 59) 00:09:56.331 10068.713 - 10128.291: 5.1393% ( 70) 00:09:56.331 10128.291 - 10187.869: 6.2218% ( 115) 00:09:56.331 10187.869 - 10247.447: 7.5489% ( 141) 00:09:56.331 10247.447 - 10307.025: 9.0926% ( 164) 00:09:56.331 10307.025 - 10366.604: 10.9469% ( 197) 00:09:56.331 10366.604 - 10426.182: 13.1118% ( 230) 00:09:56.331 10426.182 - 10485.760: 15.7850% ( 284) 00:09:56.331 10485.760 - 10545.338: 18.6088% ( 300) 00:09:56.331 10545.338 - 10604.916: 20.9714% ( 251) 00:09:56.331 10604.916 - 10664.495: 23.3434% ( 252) 00:09:56.331 
10664.495 - 10724.073: 25.4800% ( 227) 00:09:56.331 10724.073 - 10783.651: 27.2026% ( 183) 00:09:56.331 10783.651 - 10843.229: 28.6427% ( 153) 00:09:56.331 10843.229 - 10902.807: 30.4499% ( 192) 00:09:56.331 10902.807 - 10962.385: 32.5113% ( 219) 00:09:56.331 10962.385 - 11021.964: 34.1679% ( 176) 00:09:56.331 11021.964 - 11081.542: 36.2575% ( 222) 00:09:56.331 11081.542 - 11141.120: 37.9518% ( 180) 00:09:56.331 11141.120 - 11200.698: 39.5990% ( 175) 00:09:56.331 11200.698 - 11260.276: 41.4816% ( 200) 00:09:56.331 11260.276 - 11319.855: 43.1476% ( 177) 00:09:56.331 11319.855 - 11379.433: 44.8607% ( 182) 00:09:56.331 11379.433 - 11439.011: 46.3855% ( 162) 00:09:56.331 11439.011 - 11498.589: 47.6845% ( 138) 00:09:56.331 11498.589 - 11558.167: 48.9270% ( 132) 00:09:56.331 11558.167 - 11617.745: 50.0659% ( 121) 00:09:56.331 11617.745 - 11677.324: 51.0636% ( 106) 00:09:56.331 11677.324 - 11736.902: 51.9202% ( 91) 00:09:56.331 11736.902 - 11796.480: 52.7673% ( 90) 00:09:56.331 11796.480 - 11856.058: 53.5392% ( 82) 00:09:56.331 11856.058 - 11915.636: 54.5181% ( 104) 00:09:56.331 11915.636 - 11975.215: 56.2218% ( 181) 00:09:56.331 11975.215 - 12034.793: 57.5113% ( 137) 00:09:56.331 12034.793 - 12094.371: 58.7349% ( 130) 00:09:56.331 12094.371 - 12153.949: 60.0339% ( 138) 00:09:56.331 12153.949 - 12213.527: 61.3611% ( 141) 00:09:56.331 12213.527 - 12273.105: 62.8200% ( 155) 00:09:56.331 12273.105 - 12332.684: 64.3919% ( 167) 00:09:56.331 12332.684 - 12392.262: 65.9356% ( 164) 00:09:56.331 12392.262 - 12451.840: 67.4699% ( 163) 00:09:56.331 12451.840 - 12511.418: 68.8159% ( 143) 00:09:56.331 12511.418 - 12570.996: 69.9831% ( 124) 00:09:56.331 12570.996 - 12630.575: 71.2349% ( 133) 00:09:56.331 12630.575 - 12690.153: 72.6092% ( 146) 00:09:56.331 12690.153 - 12749.731: 73.7764% ( 124) 00:09:56.331 12749.731 - 12809.309: 75.0471% ( 135) 00:09:56.331 12809.309 - 12868.887: 76.3084% ( 134) 00:09:56.331 12868.887 - 12928.465: 77.4755% ( 124) 00:09:56.331 12928.465 - 12988.044: 78.6992% ( 130) 00:09:56.331 12988.044 - 13047.622: 79.7252% ( 109) 00:09:56.331 13047.622 - 13107.200: 80.6852% ( 102) 00:09:56.331 13107.200 - 13166.778: 81.3912% ( 75) 00:09:56.331 13166.778 - 13226.356: 82.0218% ( 67) 00:09:56.331 13226.356 - 13285.935: 82.8690% ( 90) 00:09:56.331 13285.935 - 13345.513: 83.4996% ( 67) 00:09:56.331 13345.513 - 13405.091: 84.0361% ( 57) 00:09:56.331 13405.091 - 13464.669: 84.7327% ( 74) 00:09:56.331 13464.669 - 13524.247: 85.4198% ( 73) 00:09:56.331 13524.247 - 13583.825: 86.1352% ( 76) 00:09:56.331 13583.825 - 13643.404: 86.7752% ( 68) 00:09:56.331 13643.404 - 13702.982: 87.4529% ( 72) 00:09:56.331 13702.982 - 13762.560: 88.2248% ( 82) 00:09:56.331 13762.560 - 13822.138: 89.0907% ( 92) 00:09:56.331 13822.138 - 13881.716: 89.7120% ( 66) 00:09:56.331 13881.716 - 13941.295: 90.4462% ( 78) 00:09:56.331 13941.295 - 14000.873: 91.0203% ( 61) 00:09:56.331 14000.873 - 14060.451: 91.7451% ( 77) 00:09:56.331 14060.451 - 14120.029: 92.4511% ( 75) 00:09:56.331 14120.029 - 14179.607: 93.0441% ( 63) 00:09:56.331 14179.607 - 14239.185: 93.5429% ( 53) 00:09:56.331 14239.185 - 14298.764: 94.0324% ( 52) 00:09:56.331 14298.764 - 14358.342: 94.4183% ( 41) 00:09:56.331 14358.342 - 14417.920: 94.8230% ( 43) 00:09:56.331 14417.920 - 14477.498: 95.1337% ( 33) 00:09:56.331 14477.498 - 14537.076: 95.3878% ( 27) 00:09:56.331 14537.076 - 14596.655: 95.6702% ( 30) 00:09:56.331 14596.655 - 14656.233: 95.9337% ( 28) 00:09:56.331 14656.233 - 14715.811: 96.1596% ( 24) 00:09:56.331 14715.811 - 14775.389: 96.4138% ( 27) 
00:09:56.331 14775.389 - 14834.967: 96.6962% ( 30) 00:09:56.331 14834.967 - 14894.545: 96.9691% ( 29) 00:09:56.331 14894.545 - 14954.124: 97.1950% ( 24) 00:09:56.331 14954.124 - 15013.702: 97.4398% ( 26) 00:09:56.331 15013.702 - 15073.280: 97.7127% ( 29) 00:09:56.331 15073.280 - 15132.858: 97.9104% ( 21) 00:09:56.331 15132.858 - 15192.436: 98.0704% ( 17) 00:09:56.331 15192.436 - 15252.015: 98.2116% ( 15) 00:09:56.331 15252.015 - 15371.171: 98.3998% ( 20) 00:09:56.331 15371.171 - 15490.327: 98.4752% ( 8) 00:09:56.331 15490.327 - 15609.484: 98.5599% ( 9) 00:09:56.331 15609.484 - 15728.640: 98.5693% ( 1) 00:09:56.331 15728.640 - 15847.796: 98.6069% ( 4) 00:09:56.331 15847.796 - 15966.953: 98.6446% ( 4) 00:09:56.331 15966.953 - 16086.109: 98.6916% ( 5) 00:09:56.331 16086.109 - 16205.265: 98.7481% ( 6) 00:09:56.331 16205.265 - 16324.422: 98.7858% ( 4) 00:09:56.331 16324.422 - 16443.578: 98.7952% ( 1) 00:09:56.331 25380.305 - 25499.462: 98.8046% ( 1) 00:09:56.331 25499.462 - 25618.618: 98.8140% ( 1) 00:09:56.331 25618.618 - 25737.775: 98.8517% ( 4) 00:09:56.331 25737.775 - 25856.931: 98.8705% ( 2) 00:09:56.331 25856.931 - 25976.087: 98.9175% ( 5) 00:09:56.331 25976.087 - 26095.244: 98.9552% ( 4) 00:09:56.331 26095.244 - 26214.400: 98.9834% ( 3) 00:09:56.331 26214.400 - 26333.556: 99.0305% ( 5) 00:09:56.331 26333.556 - 26452.713: 99.0587% ( 3) 00:09:56.331 26452.713 - 26571.869: 99.0776% ( 2) 00:09:56.331 26571.869 - 26691.025: 99.1152% ( 4) 00:09:56.331 26691.025 - 26810.182: 99.1717% ( 6) 00:09:56.331 26810.182 - 26929.338: 99.2470% ( 8) 00:09:56.331 26929.338 - 27048.495: 99.3035% ( 6) 00:09:56.331 27048.495 - 27167.651: 99.3129% ( 1) 00:09:56.331 27167.651 - 27286.807: 99.3317% ( 2) 00:09:56.331 27286.807 - 27405.964: 99.3599% ( 3) 00:09:56.331 27405.964 - 27525.120: 99.3882% ( 3) 00:09:56.331 27525.120 - 27644.276: 99.3976% ( 1) 00:09:56.331 35031.971 - 35270.284: 99.4635% ( 7) 00:09:56.331 35270.284 - 35508.596: 99.5105% ( 5) 00:09:56.331 35508.596 - 35746.909: 99.5858% ( 8) 00:09:56.331 35746.909 - 35985.222: 99.6423% ( 6) 00:09:56.331 35985.222 - 36223.535: 99.7082% ( 7) 00:09:56.331 36223.535 - 36461.847: 99.7835% ( 8) 00:09:56.331 36461.847 - 36700.160: 99.8494% ( 7) 00:09:56.331 36700.160 - 36938.473: 99.9153% ( 7) 00:09:56.331 36938.473 - 37176.785: 99.9812% ( 7) 00:09:56.331 37176.785 - 37415.098: 100.0000% ( 2) 00:09:56.331 00:09:56.331 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:56.331 ============================================================================== 00:09:56.331 Range in us Cumulative IO count 00:09:56.331 8757.993 - 8817.571: 0.0094% ( 1) 00:09:56.331 8996.305 - 9055.884: 0.0282% ( 2) 00:09:56.331 9055.884 - 9115.462: 0.0941% ( 7) 00:09:56.331 9115.462 - 9175.040: 0.1788% ( 9) 00:09:56.331 9175.040 - 9234.618: 0.2918% ( 12) 00:09:56.331 9234.618 - 9294.196: 0.6024% ( 33) 00:09:56.331 9294.196 - 9353.775: 1.4590% ( 91) 00:09:56.331 9353.775 - 9413.353: 1.6002% ( 15) 00:09:56.331 9413.353 - 9472.931: 1.6849% ( 9) 00:09:56.331 9472.931 - 9532.509: 1.7319% ( 5) 00:09:56.331 9532.509 - 9592.087: 1.7508% ( 2) 00:09:56.331 9592.087 - 9651.665: 1.7790% ( 3) 00:09:56.331 9651.665 - 9711.244: 1.8166% ( 4) 00:09:56.331 9711.244 - 9770.822: 1.8355% ( 2) 00:09:56.331 9770.822 - 9830.400: 1.9484% ( 12) 00:09:56.331 9830.400 - 9889.978: 2.0990% ( 16) 00:09:56.331 9889.978 - 9949.556: 2.3155% ( 23) 00:09:56.331 9949.556 - 10009.135: 3.1250% ( 86) 00:09:56.331 10009.135 - 10068.713: 3.5580% ( 46) 00:09:56.331 10068.713 - 10128.291: 4.2451% ( 73) 00:09:56.331 
10128.291 - 10187.869: 4.6969% ( 48) 00:09:56.331 10187.869 - 10247.447: 5.5817% ( 94) 00:09:56.331 10247.447 - 10307.025: 6.5230% ( 100) 00:09:56.331 10307.025 - 10366.604: 7.7654% ( 132) 00:09:56.331 10366.604 - 10426.182: 9.2244% ( 155) 00:09:56.331 10426.182 - 10485.760: 11.1728% ( 207) 00:09:56.331 10485.760 - 10545.338: 13.4507% ( 242) 00:09:56.331 10545.338 - 10604.916: 16.0486% ( 276) 00:09:56.331 10604.916 - 10664.495: 18.8912% ( 302) 00:09:56.331 10664.495 - 10724.073: 21.7150% ( 300) 00:09:56.331 10724.073 - 10783.651: 24.9718% ( 346) 00:09:56.331 10783.651 - 10843.229: 28.3038% ( 354) 00:09:56.331 10843.229 - 10902.807: 32.0030% ( 393) 00:09:56.331 10902.807 - 10962.385: 34.8550% ( 303) 00:09:56.331 10962.385 - 11021.964: 37.0105% ( 229) 00:09:56.331 11021.964 - 11081.542: 39.0060% ( 212) 00:09:56.331 11081.542 - 11141.120: 40.9639% ( 208) 00:09:56.332 11141.120 - 11200.698: 42.6111% ( 175) 00:09:56.332 11200.698 - 11260.276: 44.2583% ( 175) 00:09:56.332 11260.276 - 11319.855: 45.8490% ( 169) 00:09:56.332 11319.855 - 11379.433: 47.2327% ( 147) 00:09:56.332 11379.433 - 11439.011: 48.4469% ( 129) 00:09:56.332 11439.011 - 11498.589: 49.7176% ( 135) 00:09:56.332 11498.589 - 11558.167: 50.6777% ( 102) 00:09:56.332 11558.167 - 11617.745: 51.7319% ( 112) 00:09:56.332 11617.745 - 11677.324: 52.4567% ( 77) 00:09:56.332 11677.324 - 11736.902: 53.2568% ( 85) 00:09:56.332 11736.902 - 11796.480: 54.0851% ( 88) 00:09:56.332 11796.480 - 11856.058: 54.7534% ( 71) 00:09:56.332 11856.058 - 11915.636: 55.6194% ( 92) 00:09:56.332 11915.636 - 11975.215: 56.6359% ( 108) 00:09:56.332 11975.215 - 12034.793: 57.6619% ( 109) 00:09:56.332 12034.793 - 12094.371: 58.5749% ( 97) 00:09:56.332 12094.371 - 12153.949: 59.7044% ( 120) 00:09:56.332 12153.949 - 12213.527: 60.7775% ( 114) 00:09:56.332 12213.527 - 12273.105: 62.1329% ( 144) 00:09:56.332 12273.105 - 12332.684: 63.6578% ( 162) 00:09:56.332 12332.684 - 12392.262: 65.0697% ( 150) 00:09:56.332 12392.262 - 12451.840: 66.5569% ( 158) 00:09:56.332 12451.840 - 12511.418: 68.0911% ( 163) 00:09:56.332 12511.418 - 12570.996: 69.4936% ( 149) 00:09:56.332 12570.996 - 12630.575: 70.8020% ( 139) 00:09:56.332 12630.575 - 12690.153: 72.2139% ( 150) 00:09:56.332 12690.153 - 12749.731: 73.8046% ( 169) 00:09:56.332 12749.731 - 12809.309: 75.5271% ( 183) 00:09:56.332 12809.309 - 12868.887: 76.9202% ( 148) 00:09:56.332 12868.887 - 12928.465: 78.0120% ( 116) 00:09:56.332 12928.465 - 12988.044: 79.2451% ( 131) 00:09:56.332 12988.044 - 13047.622: 80.3181% ( 114) 00:09:56.332 13047.622 - 13107.200: 81.3535% ( 110) 00:09:56.332 13107.200 - 13166.778: 82.3795% ( 109) 00:09:56.332 13166.778 - 13226.356: 83.2925% ( 97) 00:09:56.332 13226.356 - 13285.935: 83.9326% ( 68) 00:09:56.332 13285.935 - 13345.513: 84.4785% ( 58) 00:09:56.332 13345.513 - 13405.091: 85.0151% ( 57) 00:09:56.332 13405.091 - 13464.669: 85.6175% ( 64) 00:09:56.332 13464.669 - 13524.247: 86.3611% ( 79) 00:09:56.332 13524.247 - 13583.825: 87.0011% ( 68) 00:09:56.332 13583.825 - 13643.404: 87.6412% ( 68) 00:09:56.332 13643.404 - 13702.982: 88.1495% ( 54) 00:09:56.332 13702.982 - 13762.560: 88.5825% ( 46) 00:09:56.332 13762.560 - 13822.138: 88.9401% ( 38) 00:09:56.332 13822.138 - 13881.716: 89.3355% ( 42) 00:09:56.332 13881.716 - 13941.295: 89.8249% ( 52) 00:09:56.332 13941.295 - 14000.873: 90.3991% ( 61) 00:09:56.332 14000.873 - 14060.451: 91.0392% ( 68) 00:09:56.332 14060.451 - 14120.029: 91.6792% ( 68) 00:09:56.332 14120.029 - 14179.607: 92.4793% ( 85) 00:09:56.332 14179.607 - 14239.185: 93.3358% ( 91) 
00:09:56.332 14239.185 - 14298.764: 94.0606% ( 77) 00:09:56.332 14298.764 - 14358.342: 94.7477% ( 73) 00:09:56.332 14358.342 - 14417.920: 95.3125% ( 60) 00:09:56.332 14417.920 - 14477.498: 95.9149% ( 64) 00:09:56.332 14477.498 - 14537.076: 96.4420% ( 56) 00:09:56.332 14537.076 - 14596.655: 97.0162% ( 61) 00:09:56.332 14596.655 - 14656.233: 97.4868% ( 50) 00:09:56.332 14656.233 - 14715.811: 97.8351% ( 37) 00:09:56.332 14715.811 - 14775.389: 98.0798% ( 26) 00:09:56.332 14775.389 - 14834.967: 98.2869% ( 22) 00:09:56.332 14834.967 - 14894.545: 98.4375% ( 16) 00:09:56.332 14894.545 - 14954.124: 98.5222% ( 9) 00:09:56.332 14954.124 - 15013.702: 98.6163% ( 10) 00:09:56.332 15013.702 - 15073.280: 98.6822% ( 7) 00:09:56.332 15073.280 - 15132.858: 98.7105% ( 3) 00:09:56.332 15132.858 - 15192.436: 98.7481% ( 4) 00:09:56.332 15192.436 - 15252.015: 98.7669% ( 2) 00:09:56.332 15252.015 - 15371.171: 98.7952% ( 3) 00:09:56.332 25499.462 - 25618.618: 98.8046% ( 1) 00:09:56.332 25618.618 - 25737.775: 98.8422% ( 4) 00:09:56.332 25737.775 - 25856.931: 98.8799% ( 4) 00:09:56.332 25856.931 - 25976.087: 98.9270% ( 5) 00:09:56.332 25976.087 - 26095.244: 98.9646% ( 4) 00:09:56.332 26095.244 - 26214.400: 99.0117% ( 5) 00:09:56.332 26214.400 - 26333.556: 99.0587% ( 5) 00:09:56.332 26333.556 - 26452.713: 99.0964% ( 4) 00:09:56.332 26452.713 - 26571.869: 99.1434% ( 5) 00:09:56.332 26571.869 - 26691.025: 99.1717% ( 3) 00:09:56.332 26691.025 - 26810.182: 99.2188% ( 5) 00:09:56.332 26810.182 - 26929.338: 99.2658% ( 5) 00:09:56.332 26929.338 - 27048.495: 99.3129% ( 5) 00:09:56.332 27048.495 - 27167.651: 99.3505% ( 4) 00:09:56.332 27167.651 - 27286.807: 99.3976% ( 5) 00:09:56.332 33840.407 - 34078.720: 99.4352% ( 4) 00:09:56.332 34078.720 - 34317.033: 99.5105% ( 8) 00:09:56.332 34317.033 - 34555.345: 99.5858% ( 8) 00:09:56.332 34555.345 - 34793.658: 99.6517% ( 7) 00:09:56.332 34793.658 - 35031.971: 99.7270% ( 8) 00:09:56.332 35031.971 - 35270.284: 99.7929% ( 7) 00:09:56.332 35270.284 - 35508.596: 99.8776% ( 9) 00:09:56.332 35508.596 - 35746.909: 99.9529% ( 8) 00:09:56.332 35746.909 - 35985.222: 100.0000% ( 5) 00:09:56.332 00:09:56.332 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:56.332 ============================================================================== 00:09:56.332 Range in us Cumulative IO count 00:09:56.332 8221.789 - 8281.367: 0.0188% ( 2) 00:09:56.332 8281.367 - 8340.945: 0.0659% ( 5) 00:09:56.332 8340.945 - 8400.524: 0.1130% ( 5) 00:09:56.332 8400.524 - 8460.102: 0.1506% ( 4) 00:09:56.332 8460.102 - 8519.680: 0.1883% ( 4) 00:09:56.332 8519.680 - 8579.258: 0.2447% ( 6) 00:09:56.332 8579.258 - 8638.836: 0.3012% ( 6) 00:09:56.332 8638.836 - 8698.415: 0.3483% ( 5) 00:09:56.332 8698.415 - 8757.993: 0.3953% ( 5) 00:09:56.332 8757.993 - 8817.571: 0.4236% ( 3) 00:09:56.332 8817.571 - 8877.149: 0.4518% ( 3) 00:09:56.332 8877.149 - 8936.727: 0.4706% ( 2) 00:09:56.332 8936.727 - 8996.305: 0.5271% ( 6) 00:09:56.332 8996.305 - 9055.884: 0.6024% ( 8) 00:09:56.332 9055.884 - 9115.462: 0.6871% ( 9) 00:09:56.332 9115.462 - 9175.040: 0.7812% ( 10) 00:09:56.332 9175.040 - 9234.618: 1.0072% ( 24) 00:09:56.332 9234.618 - 9294.196: 1.0825% ( 8) 00:09:56.332 9294.196 - 9353.775: 1.1295% ( 5) 00:09:56.332 9353.775 - 9413.353: 1.1578% ( 3) 00:09:56.332 9413.353 - 9472.931: 1.2236% ( 7) 00:09:56.332 9472.931 - 9532.509: 1.2801% ( 6) 00:09:56.332 9532.509 - 9592.087: 1.3648% ( 9) 00:09:56.332 9592.087 - 9651.665: 1.5343% ( 18) 00:09:56.332 9651.665 - 9711.244: 1.7225% ( 20) 00:09:56.332 9711.244 - 9770.822: 
1.9014% ( 19) 00:09:56.332 9770.822 - 9830.400: 2.4567% ( 59) 00:09:56.332 9830.400 - 9889.978: 2.9085% ( 48) 00:09:56.332 9889.978 - 9949.556: 3.2191% ( 33) 00:09:56.332 9949.556 - 10009.135: 3.4168% ( 21) 00:09:56.332 10009.135 - 10068.713: 3.7274% ( 33) 00:09:56.332 10068.713 - 10128.291: 4.0757% ( 37) 00:09:56.332 10128.291 - 10187.869: 4.6122% ( 57) 00:09:56.332 10187.869 - 10247.447: 5.3840% ( 82) 00:09:56.332 10247.447 - 10307.025: 6.6265% ( 132) 00:09:56.332 10307.025 - 10366.604: 8.1325% ( 160) 00:09:56.332 10366.604 - 10426.182: 10.0245% ( 201) 00:09:56.332 10426.182 - 10485.760: 11.8976% ( 199) 00:09:56.332 10485.760 - 10545.338: 13.9684% ( 220) 00:09:56.332 10545.338 - 10604.916: 16.4439% ( 263) 00:09:56.332 10604.916 - 10664.495: 19.5971% ( 335) 00:09:56.332 10664.495 - 10724.073: 23.6916% ( 435) 00:09:56.332 10724.073 - 10783.651: 26.9296% ( 344) 00:09:56.332 10783.651 - 10843.229: 29.5181% ( 275) 00:09:56.332 10843.229 - 10902.807: 32.5301% ( 320) 00:09:56.332 10902.807 - 10962.385: 34.7986% ( 241) 00:09:56.332 10962.385 - 11021.964: 37.0011% ( 234) 00:09:56.332 11021.964 - 11081.542: 38.5730% ( 167) 00:09:56.332 11081.542 - 11141.120: 40.0791% ( 160) 00:09:56.332 11141.120 - 11200.698: 41.4816% ( 149) 00:09:56.332 11200.698 - 11260.276: 42.9970% ( 161) 00:09:56.332 11260.276 - 11319.855: 44.4465% ( 154) 00:09:56.332 11319.855 - 11379.433: 45.9526% ( 160) 00:09:56.332 11379.433 - 11439.011: 47.2609% ( 139) 00:09:56.332 11439.011 - 11498.589: 48.6069% ( 143) 00:09:56.332 11498.589 - 11558.167: 50.3106% ( 181) 00:09:56.332 11558.167 - 11617.745: 51.8449% ( 163) 00:09:56.332 11617.745 - 11677.324: 53.0591% ( 129) 00:09:56.332 11677.324 - 11736.902: 54.1792% ( 119) 00:09:56.332 11736.902 - 11796.480: 54.9981% ( 87) 00:09:56.332 11796.480 - 11856.058: 55.6570% ( 70) 00:09:56.332 11856.058 - 11915.636: 56.4383% ( 83) 00:09:56.332 11915.636 - 11975.215: 57.3230% ( 94) 00:09:56.332 11975.215 - 12034.793: 58.1137% ( 84) 00:09:56.332 12034.793 - 12094.371: 58.9608% ( 90) 00:09:56.332 12094.371 - 12153.949: 59.9680% ( 107) 00:09:56.332 12153.949 - 12213.527: 61.1446% ( 125) 00:09:56.332 12213.527 - 12273.105: 62.3588% ( 129) 00:09:56.333 12273.105 - 12332.684: 63.6295% ( 135) 00:09:56.333 12332.684 - 12392.262: 65.1167% ( 158) 00:09:56.333 12392.262 - 12451.840: 66.5663% ( 154) 00:09:56.333 12451.840 - 12511.418: 68.1570% ( 169) 00:09:56.333 12511.418 - 12570.996: 69.7948% ( 174) 00:09:56.333 12570.996 - 12630.575: 71.6209% ( 194) 00:09:56.333 12630.575 - 12690.153: 73.2116% ( 169) 00:09:56.333 12690.153 - 12749.731: 74.7082% ( 159) 00:09:56.333 12749.731 - 12809.309: 76.3272% ( 172) 00:09:56.333 12809.309 - 12868.887: 77.4379% ( 118) 00:09:56.333 12868.887 - 12928.465: 78.6427% ( 128) 00:09:56.333 12928.465 - 12988.044: 79.7346% ( 116) 00:09:56.333 12988.044 - 13047.622: 80.7511% ( 108) 00:09:56.333 13047.622 - 13107.200: 81.7865% ( 110) 00:09:56.333 13107.200 - 13166.778: 82.6525% ( 92) 00:09:56.333 13166.778 - 13226.356: 83.5655% ( 97) 00:09:56.333 13226.356 - 13285.935: 84.2809% ( 76) 00:09:56.333 13285.935 - 13345.513: 84.8268% ( 58) 00:09:56.333 13345.513 - 13405.091: 85.4575% ( 67) 00:09:56.333 13405.091 - 13464.669: 85.8622% ( 43) 00:09:56.333 13464.669 - 13524.247: 86.3234% ( 49) 00:09:56.333 13524.247 - 13583.825: 86.8035% ( 51) 00:09:56.333 13583.825 - 13643.404: 87.1611% ( 38) 00:09:56.333 13643.404 - 13702.982: 87.6412% ( 51) 00:09:56.333 13702.982 - 13762.560: 88.0365% ( 42) 00:09:56.333 13762.560 - 13822.138: 88.6295% ( 63) 00:09:56.333 13822.138 - 13881.716: 89.1472% ( 
55) 00:09:56.333 13881.716 - 13941.295: 89.7873% ( 68) 00:09:56.333 13941.295 - 14000.873: 90.6156% ( 88) 00:09:56.333 14000.873 - 14060.451: 91.5663% ( 101) 00:09:56.333 14060.451 - 14120.029: 92.3852% ( 87) 00:09:56.333 14120.029 - 14179.607: 93.3735% ( 105) 00:09:56.333 14179.607 - 14239.185: 94.0136% ( 68) 00:09:56.333 14239.185 - 14298.764: 94.8419% ( 88) 00:09:56.333 14298.764 - 14358.342: 95.4725% ( 67) 00:09:56.333 14358.342 - 14417.920: 96.0279% ( 59) 00:09:56.333 14417.920 - 14477.498: 96.5550% ( 56) 00:09:56.333 14477.498 - 14537.076: 96.9691% ( 44) 00:09:56.333 14537.076 - 14596.655: 97.2986% ( 35) 00:09:56.333 14596.655 - 14656.233: 97.5809% ( 30) 00:09:56.333 14656.233 - 14715.811: 97.8539% ( 29) 00:09:56.333 14715.811 - 14775.389: 98.1175% ( 28) 00:09:56.333 14775.389 - 14834.967: 98.3057% ( 20) 00:09:56.333 14834.967 - 14894.545: 98.4563% ( 16) 00:09:56.333 14894.545 - 14954.124: 98.5599% ( 11) 00:09:56.333 14954.124 - 15013.702: 98.6446% ( 9) 00:09:56.333 15013.702 - 15073.280: 98.6822% ( 4) 00:09:56.333 15073.280 - 15132.858: 98.7105% ( 3) 00:09:56.333 15132.858 - 15192.436: 98.7387% ( 3) 00:09:56.333 15192.436 - 15252.015: 98.7575% ( 2) 00:09:56.333 15252.015 - 15371.171: 98.7952% ( 4) 00:09:56.333 25737.775 - 25856.931: 98.8234% ( 3) 00:09:56.333 25856.931 - 25976.087: 98.8893% ( 7) 00:09:56.333 25976.087 - 26095.244: 98.9834% ( 10) 00:09:56.333 26095.244 - 26214.400: 99.0587% ( 8) 00:09:56.333 26214.400 - 26333.556: 99.1246% ( 7) 00:09:56.333 26333.556 - 26452.713: 99.1529% ( 3) 00:09:56.333 26452.713 - 26571.869: 99.1811% ( 3) 00:09:56.333 26571.869 - 26691.025: 99.2093% ( 3) 00:09:56.333 26691.025 - 26810.182: 99.2376% ( 3) 00:09:56.333 26810.182 - 26929.338: 99.2752% ( 4) 00:09:56.333 26929.338 - 27048.495: 99.3035% ( 3) 00:09:56.333 27048.495 - 27167.651: 99.3411% ( 4) 00:09:56.333 27167.651 - 27286.807: 99.3788% ( 4) 00:09:56.333 27286.807 - 27405.964: 99.3976% ( 2) 00:09:56.333 32648.844 - 32887.156: 99.4541% ( 6) 00:09:56.333 33602.095 - 33840.407: 99.4729% ( 2) 00:09:56.333 33840.407 - 34078.720: 99.5388% ( 7) 00:09:56.333 34078.720 - 34317.033: 99.5858% ( 5) 00:09:56.333 34317.033 - 34555.345: 99.6517% ( 7) 00:09:56.333 34555.345 - 34793.658: 99.7364% ( 9) 00:09:56.333 34793.658 - 35031.971: 99.8023% ( 7) 00:09:56.333 35031.971 - 35270.284: 99.8776% ( 8) 00:09:56.333 35270.284 - 35508.596: 99.9529% ( 8) 00:09:56.333 35508.596 - 35746.909: 100.0000% ( 5) 00:09:56.333 00:09:56.333 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:56.333 ============================================================================== 00:09:56.333 Range in us Cumulative IO count 00:09:56.333 8221.789 - 8281.367: 0.0282% ( 3) 00:09:56.333 8281.367 - 8340.945: 0.0471% ( 2) 00:09:56.333 8340.945 - 8400.524: 0.1130% ( 7) 00:09:56.333 8400.524 - 8460.102: 0.2071% ( 10) 00:09:56.333 8460.102 - 8519.680: 0.4142% ( 22) 00:09:56.333 8519.680 - 8579.258: 0.4706% ( 6) 00:09:56.333 8579.258 - 8638.836: 0.4989% ( 3) 00:09:56.333 8638.836 - 8698.415: 0.5365% ( 4) 00:09:56.333 8698.415 - 8757.993: 0.5553% ( 2) 00:09:56.333 8757.993 - 8817.571: 0.5836% ( 3) 00:09:56.333 8817.571 - 8877.149: 0.6118% ( 3) 00:09:56.333 8877.149 - 8936.727: 0.6212% ( 1) 00:09:56.333 8936.727 - 8996.305: 0.6589% ( 4) 00:09:56.333 8996.305 - 9055.884: 0.7154% ( 6) 00:09:56.333 9055.884 - 9115.462: 0.7624% ( 5) 00:09:56.333 9115.462 - 9175.040: 0.8283% ( 7) 00:09:56.333 9175.040 - 9234.618: 1.0542% ( 24) 00:09:56.333 9234.618 - 9294.196: 1.1107% ( 6) 00:09:56.333 9294.196 - 9353.775: 1.1483% ( 4) 
00:09:56.333 9353.775 - 9413.353: 1.1672% ( 2) 00:09:56.333 9413.353 - 9472.931: 1.2236% ( 6) 00:09:56.333 9472.931 - 9532.509: 1.2895% ( 7) 00:09:56.333 9532.509 - 9592.087: 1.3460% ( 6) 00:09:56.333 9592.087 - 9651.665: 1.4684% ( 13) 00:09:56.333 9651.665 - 9711.244: 1.7508% ( 30) 00:09:56.333 9711.244 - 9770.822: 1.9202% ( 18) 00:09:56.333 9770.822 - 9830.400: 2.1273% ( 22) 00:09:56.333 9830.400 - 9889.978: 2.6261% ( 53) 00:09:56.333 9889.978 - 9949.556: 2.7956% ( 18) 00:09:56.333 9949.556 - 10009.135: 3.1156% ( 34) 00:09:56.333 10009.135 - 10068.713: 3.5015% ( 41) 00:09:56.333 10068.713 - 10128.291: 4.0286% ( 56) 00:09:56.333 10128.291 - 10187.869: 4.8852% ( 91) 00:09:56.333 10187.869 - 10247.447: 5.8453% ( 102) 00:09:56.333 10247.447 - 10307.025: 7.2289% ( 147) 00:09:56.333 10307.025 - 10366.604: 8.6785% ( 154) 00:09:56.333 10366.604 - 10426.182: 10.7492% ( 220) 00:09:56.333 10426.182 - 10485.760: 12.6977% ( 207) 00:09:56.333 10485.760 - 10545.338: 15.1073% ( 256) 00:09:56.333 10545.338 - 10604.916: 17.9217% ( 299) 00:09:56.333 10604.916 - 10664.495: 20.9055% ( 317) 00:09:56.333 10664.495 - 10724.073: 23.6163% ( 288) 00:09:56.333 10724.073 - 10783.651: 26.4684% ( 303) 00:09:56.333 10783.651 - 10843.229: 29.3016% ( 301) 00:09:56.333 10843.229 - 10902.807: 32.2101% ( 309) 00:09:56.333 10902.807 - 10962.385: 34.7609% ( 271) 00:09:56.333 10962.385 - 11021.964: 36.9447% ( 232) 00:09:56.333 11021.964 - 11081.542: 38.8742% ( 205) 00:09:56.333 11081.542 - 11141.120: 40.5309% ( 176) 00:09:56.333 11141.120 - 11200.698: 42.0275% ( 159) 00:09:56.333 11200.698 - 11260.276: 43.6747% ( 175) 00:09:56.333 11260.276 - 11319.855: 45.1431% ( 156) 00:09:56.333 11319.855 - 11379.433: 46.5173% ( 146) 00:09:56.333 11379.433 - 11439.011: 47.8539% ( 142) 00:09:56.333 11439.011 - 11498.589: 49.0023% ( 122) 00:09:56.333 11498.589 - 11558.167: 50.4142% ( 150) 00:09:56.333 11558.167 - 11617.745: 51.3742% ( 102) 00:09:56.333 11617.745 - 11677.324: 52.4755% ( 117) 00:09:56.333 11677.324 - 11736.902: 53.2568% ( 83) 00:09:56.333 11736.902 - 11796.480: 54.0851% ( 88) 00:09:56.333 11796.480 - 11856.058: 54.8099% ( 77) 00:09:56.333 11856.058 - 11915.636: 55.5817% ( 82) 00:09:56.333 11915.636 - 11975.215: 56.4477% ( 92) 00:09:56.333 11975.215 - 12034.793: 57.2760% ( 88) 00:09:56.333 12034.793 - 12094.371: 58.3584% ( 115) 00:09:56.333 12094.371 - 12153.949: 59.4691% ( 118) 00:09:56.333 12153.949 - 12213.527: 60.4669% ( 106) 00:09:56.333 12213.527 - 12273.105: 61.6058% ( 121) 00:09:56.333 12273.105 - 12332.684: 63.0459% ( 153) 00:09:56.333 12332.684 - 12392.262: 64.6084% ( 166) 00:09:56.333 12392.262 - 12451.840: 66.0486% ( 153) 00:09:56.333 12451.840 - 12511.418: 67.4511% ( 149) 00:09:56.333 12511.418 - 12570.996: 68.9571% ( 160) 00:09:56.333 12570.996 - 12630.575: 70.5478% ( 169) 00:09:56.333 12630.575 - 12690.153: 72.2986% ( 186) 00:09:56.333 12690.153 - 12749.731: 73.8517% ( 165) 00:09:56.333 12749.731 - 12809.309: 75.6683% ( 193) 00:09:56.333 12809.309 - 12868.887: 77.3814% ( 182) 00:09:56.333 12868.887 - 12928.465: 78.8309% ( 154) 00:09:56.333 12928.465 - 12988.044: 80.0358% ( 128) 00:09:56.333 12988.044 - 13047.622: 81.2123% ( 125) 00:09:56.333 13047.622 - 13107.200: 82.2007% ( 105) 00:09:56.333 13107.200 - 13166.778: 83.1890% ( 105) 00:09:56.333 13166.778 - 13226.356: 83.9232% ( 78) 00:09:56.333 13226.356 - 13285.935: 84.6197% ( 74) 00:09:56.333 13285.935 - 13345.513: 85.4669% ( 90) 00:09:56.333 13345.513 - 13405.091: 86.0599% ( 63) 00:09:56.333 13405.091 - 13464.669: 86.4834% ( 45) 00:09:56.333 13464.669 - 
13524.247: 87.0294% ( 58) 00:09:56.333 13524.247 - 13583.825: 87.5188% ( 52) 00:09:56.333 13583.825 - 13643.404: 87.9424% ( 45) 00:09:56.333 13643.404 - 13702.982: 88.3001% ( 38) 00:09:56.333 13702.982 - 13762.560: 88.6766% ( 40) 00:09:56.333 13762.560 - 13822.138: 89.2790% ( 64) 00:09:56.333 13822.138 - 13881.716: 89.9191% ( 68) 00:09:56.333 13881.716 - 13941.295: 90.6344% ( 76) 00:09:56.333 13941.295 - 14000.873: 91.3968% ( 81) 00:09:56.333 14000.873 - 14060.451: 92.2534% ( 91) 00:09:56.333 14060.451 - 14120.029: 93.1947% ( 100) 00:09:56.333 14120.029 - 14179.607: 93.9571% ( 81) 00:09:56.333 14179.607 - 14239.185: 94.4371% ( 51) 00:09:56.333 14239.185 - 14298.764: 94.9925% ( 59) 00:09:56.333 14298.764 - 14358.342: 95.4913% ( 53) 00:09:56.333 14358.342 - 14417.920: 96.1314% ( 68) 00:09:56.333 14417.920 - 14477.498: 96.6020% ( 50) 00:09:56.333 14477.498 - 14537.076: 97.0350% ( 46) 00:09:56.333 14537.076 - 14596.655: 97.4021% ( 39) 00:09:56.333 14596.655 - 14656.233: 97.6562% ( 27) 00:09:56.333 14656.233 - 14715.811: 97.8727% ( 23) 00:09:56.333 14715.811 - 14775.389: 98.0704% ( 21) 00:09:56.333 14775.389 - 14834.967: 98.2869% ( 23) 00:09:56.333 14834.967 - 14894.545: 98.3998% ( 12) 00:09:56.333 14894.545 - 14954.124: 98.4846% ( 9) 00:09:56.334 14954.124 - 15013.702: 98.5410% ( 6) 00:09:56.334 15013.702 - 15073.280: 98.5881% ( 5) 00:09:56.334 15073.280 - 15132.858: 98.6352% ( 5) 00:09:56.334 15132.858 - 15192.436: 98.6728% ( 4) 00:09:56.334 15192.436 - 15252.015: 98.7199% ( 5) 00:09:56.334 15252.015 - 15371.171: 98.7669% ( 5) 00:09:56.334 15371.171 - 15490.327: 98.7952% ( 3) 00:09:56.334 24903.680 - 25022.836: 98.8046% ( 1) 00:09:56.334 25022.836 - 25141.993: 98.8799% ( 8) 00:09:56.334 25141.993 - 25261.149: 98.9458% ( 7) 00:09:56.334 25261.149 - 25380.305: 99.0211% ( 8) 00:09:56.334 25380.305 - 25499.462: 99.1058% ( 9) 00:09:56.334 25499.462 - 25618.618: 99.1340% ( 3) 00:09:56.334 25618.618 - 25737.775: 99.1623% ( 3) 00:09:56.334 25737.775 - 25856.931: 99.1999% ( 4) 00:09:56.334 25856.931 - 25976.087: 99.2282% ( 3) 00:09:56.334 25976.087 - 26095.244: 99.2564% ( 3) 00:09:56.334 26095.244 - 26214.400: 99.2941% ( 4) 00:09:56.334 26214.400 - 26333.556: 99.3223% ( 3) 00:09:56.334 26333.556 - 26452.713: 99.3505% ( 3) 00:09:56.334 26452.713 - 26571.869: 99.3788% ( 3) 00:09:56.334 26571.869 - 26691.025: 99.3976% ( 2) 00:09:56.334 31933.905 - 32172.218: 99.4823% ( 9) 00:09:56.334 32172.218 - 32410.531: 99.5576% ( 8) 00:09:56.334 32410.531 - 32648.844: 99.6141% ( 6) 00:09:56.334 32887.156 - 33125.469: 99.6329% ( 2) 00:09:56.334 33125.469 - 33363.782: 99.6894% ( 6) 00:09:56.334 33363.782 - 33602.095: 99.7647% ( 8) 00:09:56.334 33602.095 - 33840.407: 99.8306% ( 7) 00:09:56.334 33840.407 - 34078.720: 99.8776% ( 5) 00:09:56.334 34078.720 - 34317.033: 99.9341% ( 6) 00:09:56.334 34317.033 - 34555.345: 100.0000% ( 7) 00:09:56.334 00:09:56.334 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:56.334 ============================================================================== 00:09:56.334 Range in us Cumulative IO count 00:09:56.334 7864.320 - 7923.898: 0.0282% ( 3) 00:09:56.334 7923.898 - 7983.476: 0.0753% ( 5) 00:09:56.334 7983.476 - 8043.055: 0.1224% ( 5) 00:09:56.334 8043.055 - 8102.633: 0.2730% ( 16) 00:09:56.334 8102.633 - 8162.211: 0.4800% ( 22) 00:09:56.334 8162.211 - 8221.789: 0.5083% ( 3) 00:09:56.334 8221.789 - 8281.367: 0.5271% ( 2) 00:09:56.334 8281.367 - 8340.945: 0.5553% ( 3) 00:09:56.334 8340.945 - 8400.524: 0.5836% ( 3) 00:09:56.334 8400.524 - 8460.102: 0.6024% ( 2) 
00:09:56.334 8817.571 - 8877.149: 0.6306% ( 3) 00:09:56.334 8877.149 - 8936.727: 0.6777% ( 5) 00:09:56.334 8936.727 - 8996.305: 0.7342% ( 6) 00:09:56.334 8996.305 - 9055.884: 0.8095% ( 8) 00:09:56.334 9055.884 - 9115.462: 0.9413% ( 14) 00:09:56.334 9115.462 - 9175.040: 1.0166% ( 8) 00:09:56.334 9175.040 - 9234.618: 1.0636% ( 5) 00:09:56.334 9234.618 - 9294.196: 1.1766% ( 12) 00:09:56.334 9294.196 - 9353.775: 1.2236% ( 5) 00:09:56.334 9353.775 - 9413.353: 1.2895% ( 7) 00:09:56.334 9413.353 - 9472.931: 1.6002% ( 33) 00:09:56.334 9472.931 - 9532.509: 1.7131% ( 12) 00:09:56.334 9532.509 - 9592.087: 1.7602% ( 5) 00:09:56.334 9592.087 - 9651.665: 1.8166% ( 6) 00:09:56.334 9651.665 - 9711.244: 1.9296% ( 12) 00:09:56.334 9711.244 - 9770.822: 2.0520% ( 13) 00:09:56.334 9770.822 - 9830.400: 2.3532% ( 32) 00:09:56.334 9830.400 - 9889.978: 2.5038% ( 16) 00:09:56.334 9889.978 - 9949.556: 2.7956% ( 31) 00:09:56.334 9949.556 - 10009.135: 3.0309% ( 25) 00:09:56.334 10009.135 - 10068.713: 3.4074% ( 40) 00:09:56.334 10068.713 - 10128.291: 4.1698% ( 81) 00:09:56.334 10128.291 - 10187.869: 5.1299% ( 102) 00:09:56.334 10187.869 - 10247.447: 6.1747% ( 111) 00:09:56.334 10247.447 - 10307.025: 7.5678% ( 148) 00:09:56.334 10307.025 - 10366.604: 8.8385% ( 135) 00:09:56.334 10366.604 - 10426.182: 10.4198% ( 168) 00:09:56.334 10426.182 - 10485.760: 12.0576% ( 174) 00:09:56.334 10485.760 - 10545.338: 14.2319% ( 231) 00:09:56.334 10545.338 - 10604.916: 17.0463% ( 299) 00:09:56.334 10604.916 - 10664.495: 20.1242% ( 327) 00:09:56.334 10664.495 - 10724.073: 23.1551% ( 322) 00:09:56.334 10724.073 - 10783.651: 26.4778% ( 353) 00:09:56.334 10783.651 - 10843.229: 29.6216% ( 334) 00:09:56.334 10843.229 - 10902.807: 32.5489% ( 311) 00:09:56.334 10902.807 - 10962.385: 35.6928% ( 334) 00:09:56.334 10962.385 - 11021.964: 37.6600% ( 209) 00:09:56.334 11021.964 - 11081.542: 39.5331% ( 199) 00:09:56.334 11081.542 - 11141.120: 41.3215% ( 190) 00:09:56.334 11141.120 - 11200.698: 42.8370% ( 161) 00:09:56.334 11200.698 - 11260.276: 44.1642% ( 141) 00:09:56.334 11260.276 - 11319.855: 45.5949% ( 152) 00:09:56.334 11319.855 - 11379.433: 46.7244% ( 120) 00:09:56.334 11379.433 - 11439.011: 47.9951% ( 135) 00:09:56.334 11439.011 - 11498.589: 49.2470% ( 133) 00:09:56.334 11498.589 - 11558.167: 50.1035% ( 91) 00:09:56.334 11558.167 - 11617.745: 50.9413% ( 89) 00:09:56.334 11617.745 - 11677.324: 51.7978% ( 91) 00:09:56.334 11677.324 - 11736.902: 52.6167% ( 87) 00:09:56.334 11736.902 - 11796.480: 53.4074% ( 84) 00:09:56.334 11796.480 - 11856.058: 54.3298% ( 98) 00:09:56.334 11856.058 - 11915.636: 54.9981% ( 71) 00:09:56.334 11915.636 - 11975.215: 55.8076% ( 86) 00:09:56.334 11975.215 - 12034.793: 56.7206% ( 97) 00:09:56.334 12034.793 - 12094.371: 57.5301% ( 86) 00:09:56.334 12094.371 - 12153.949: 58.5184% ( 105) 00:09:56.334 12153.949 - 12213.527: 59.7044% ( 126) 00:09:56.334 12213.527 - 12273.105: 60.8716% ( 124) 00:09:56.334 12273.105 - 12332.684: 62.3776% ( 160) 00:09:56.334 12332.684 - 12392.262: 64.2131% ( 195) 00:09:56.334 12392.262 - 12451.840: 66.0109% ( 191) 00:09:56.334 12451.840 - 12511.418: 67.8840% ( 199) 00:09:56.334 12511.418 - 12570.996: 69.5501% ( 177) 00:09:56.334 12570.996 - 12630.575: 71.2161% ( 177) 00:09:56.334 12630.575 - 12690.153: 73.0233% ( 192) 00:09:56.334 12690.153 - 12749.731: 74.5670% ( 164) 00:09:56.334 12749.731 - 12809.309: 75.9977% ( 152) 00:09:56.334 12809.309 - 12868.887: 77.6355% ( 174) 00:09:56.334 12868.887 - 12928.465: 78.9251% ( 137) 00:09:56.334 12928.465 - 12988.044: 80.0734% ( 122) 00:09:56.334 
12988.044 - 13047.622: 81.1559% ( 115) 00:09:56.334 13047.622 - 13107.200: 82.4925% ( 142) 00:09:56.334 13107.200 - 13166.778: 83.4526% ( 102) 00:09:56.334 13166.778 - 13226.356: 83.9703% ( 55) 00:09:56.334 13226.356 - 13285.935: 84.5444% ( 61) 00:09:56.334 13285.935 - 13345.513: 85.1374% ( 63) 00:09:56.334 13345.513 - 13405.091: 85.7116% ( 61) 00:09:56.334 13405.091 - 13464.669: 86.8317% ( 119) 00:09:56.334 13464.669 - 13524.247: 87.4153% ( 62) 00:09:56.334 13524.247 - 13583.825: 87.8200% ( 43) 00:09:56.334 13583.825 - 13643.404: 88.1965% ( 40) 00:09:56.334 13643.404 - 13702.982: 88.6766% ( 51) 00:09:56.334 13702.982 - 13762.560: 89.2131% ( 57) 00:09:56.334 13762.560 - 13822.138: 89.6367% ( 45) 00:09:56.334 13822.138 - 13881.716: 90.1167% ( 51) 00:09:56.334 13881.716 - 13941.295: 90.7191% ( 64) 00:09:56.334 13941.295 - 14000.873: 91.4910% ( 82) 00:09:56.334 14000.873 - 14060.451: 92.3852% ( 95) 00:09:56.334 14060.451 - 14120.029: 93.0911% ( 75) 00:09:56.334 14120.029 - 14179.607: 93.8159% ( 77) 00:09:56.334 14179.607 - 14239.185: 94.6066% ( 84) 00:09:56.334 14239.185 - 14298.764: 95.3502% ( 79) 00:09:56.334 14298.764 - 14358.342: 95.8396% ( 52) 00:09:56.334 14358.342 - 14417.920: 96.2726% ( 46) 00:09:56.334 14417.920 - 14477.498: 96.7903% ( 55) 00:09:56.334 14477.498 - 14537.076: 97.1291% ( 36) 00:09:56.334 14537.076 - 14596.655: 97.4209% ( 31) 00:09:56.334 14596.655 - 14656.233: 97.6657% ( 26) 00:09:56.334 14656.233 - 14715.811: 97.9010% ( 25) 00:09:56.334 14715.811 - 14775.389: 98.1551% ( 27) 00:09:56.334 14775.389 - 14834.967: 98.3528% ( 21) 00:09:56.334 14834.967 - 14894.545: 98.4752% ( 13) 00:09:56.334 14894.545 - 14954.124: 98.5787% ( 11) 00:09:56.334 14954.124 - 15013.702: 98.6540% ( 8) 00:09:56.334 15013.702 - 15073.280: 98.7011% ( 5) 00:09:56.334 15073.280 - 15132.858: 98.7387% ( 4) 00:09:56.334 15132.858 - 15192.436: 98.7575% ( 2) 00:09:56.334 15192.436 - 15252.015: 98.7764% ( 2) 00:09:56.334 15252.015 - 15371.171: 98.7952% ( 2) 00:09:56.334 23473.804 - 23592.960: 98.8046% ( 1) 00:09:56.334 23592.960 - 23712.116: 98.8611% ( 6) 00:09:56.334 23712.116 - 23831.273: 98.9270% ( 7) 00:09:56.334 23831.273 - 23950.429: 99.0023% ( 8) 00:09:56.334 23950.429 - 24069.585: 99.0776% ( 8) 00:09:56.334 24069.585 - 24188.742: 99.1529% ( 8) 00:09:56.334 24188.742 - 24307.898: 99.1905% ( 4) 00:09:56.334 24307.898 - 24427.055: 99.2188% ( 3) 00:09:56.334 24427.055 - 24546.211: 99.2470% ( 3) 00:09:56.334 24546.211 - 24665.367: 99.2752% ( 3) 00:09:56.334 24665.367 - 24784.524: 99.3035% ( 3) 00:09:56.335 24784.524 - 24903.680: 99.3317% ( 3) 00:09:56.335 24903.680 - 25022.836: 99.3505% ( 2) 00:09:56.335 25022.836 - 25141.993: 99.3788% ( 3) 00:09:56.335 25141.993 - 25261.149: 99.3976% ( 2) 00:09:56.335 30504.029 - 30742.342: 99.4729% ( 8) 00:09:56.335 31218.967 - 31457.280: 99.4823% ( 1) 00:09:56.335 31457.280 - 31695.593: 99.4917% ( 1) 00:09:56.335 31695.593 - 31933.905: 99.5482% ( 6) 00:09:56.335 31933.905 - 32172.218: 99.6047% ( 6) 00:09:56.335 32172.218 - 32410.531: 99.6800% ( 8) 00:09:56.335 32410.531 - 32648.844: 99.7364% ( 6) 00:09:56.335 32648.844 - 32887.156: 99.8117% ( 8) 00:09:56.335 32887.156 - 33125.469: 99.8776% ( 7) 00:09:56.335 33125.469 - 33363.782: 99.9529% ( 8) 00:09:56.335 33363.782 - 33602.095: 100.0000% ( 5) 00:09:56.335 00:09:56.335 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:56.335 ============================================================================== 00:09:56.335 Range in us Cumulative IO count 00:09:56.335 7477.062 - 7506.851: 0.0094% ( 1) 
00:09:56.335 7626.007 - 7685.585: 0.0471% ( 4) 00:09:56.335 7685.585 - 7745.164: 0.1130% ( 7) 00:09:56.335 7745.164 - 7804.742: 0.3106% ( 21) 00:09:56.335 7804.742 - 7864.320: 0.4612% ( 16) 00:09:56.335 7864.320 - 7923.898: 0.4989% ( 4) 00:09:56.335 7923.898 - 7983.476: 0.5083% ( 1) 00:09:56.335 7983.476 - 8043.055: 0.5365% ( 3) 00:09:56.335 8043.055 - 8102.633: 0.5553% ( 2) 00:09:56.335 8102.633 - 8162.211: 0.5648% ( 1) 00:09:56.335 8162.211 - 8221.789: 0.5836% ( 2) 00:09:56.335 8221.789 - 8281.367: 0.6024% ( 2) 00:09:56.335 8877.149 - 8936.727: 0.6118% ( 1) 00:09:56.335 8936.727 - 8996.305: 0.6212% ( 1) 00:09:56.335 8996.305 - 9055.884: 0.6683% ( 5) 00:09:56.335 9055.884 - 9115.462: 0.7059% ( 4) 00:09:56.335 9115.462 - 9175.040: 0.9319% ( 24) 00:09:56.335 9175.040 - 9234.618: 1.1107% ( 19) 00:09:56.335 9234.618 - 9294.196: 1.1578% ( 5) 00:09:56.335 9294.196 - 9353.775: 1.2236% ( 7) 00:09:56.335 9353.775 - 9413.353: 1.2801% ( 6) 00:09:56.335 9413.353 - 9472.931: 1.3837% ( 11) 00:09:56.335 9472.931 - 9532.509: 1.6660% ( 30) 00:09:56.335 9532.509 - 9592.087: 1.7508% ( 9) 00:09:56.335 9592.087 - 9651.665: 1.7884% ( 4) 00:09:56.335 9651.665 - 9711.244: 1.9014% ( 12) 00:09:56.335 9711.244 - 9770.822: 2.0708% ( 18) 00:09:56.335 9770.822 - 9830.400: 2.3532% ( 30) 00:09:56.335 9830.400 - 9889.978: 2.5132% ( 17) 00:09:56.335 9889.978 - 9949.556: 3.3038% ( 84) 00:09:56.335 9949.556 - 10009.135: 3.6239% ( 34) 00:09:56.335 10009.135 - 10068.713: 3.9816% ( 38) 00:09:56.335 10068.713 - 10128.291: 4.5369% ( 59) 00:09:56.335 10128.291 - 10187.869: 5.2711% ( 78) 00:09:56.335 10187.869 - 10247.447: 6.2688% ( 106) 00:09:56.335 10247.447 - 10307.025: 7.1254% ( 91) 00:09:56.335 10307.025 - 10366.604: 8.5561% ( 152) 00:09:56.335 10366.604 - 10426.182: 10.4104% ( 197) 00:09:56.335 10426.182 - 10485.760: 12.1235% ( 182) 00:09:56.335 10485.760 - 10545.338: 14.2978% ( 231) 00:09:56.335 10545.338 - 10604.916: 17.4040% ( 330) 00:09:56.335 10604.916 - 10664.495: 20.6890% ( 349) 00:09:56.335 10664.495 - 10724.073: 23.5505% ( 304) 00:09:56.335 10724.073 - 10783.651: 26.5343% ( 317) 00:09:56.335 10783.651 - 10843.229: 29.1227% ( 275) 00:09:56.335 10843.229 - 10902.807: 32.0030% ( 306) 00:09:56.335 10902.807 - 10962.385: 35.0151% ( 320) 00:09:56.335 10962.385 - 11021.964: 37.5000% ( 264) 00:09:56.335 11021.964 - 11081.542: 39.3637% ( 198) 00:09:56.335 11081.542 - 11141.120: 41.1050% ( 185) 00:09:56.335 11141.120 - 11200.698: 42.8276% ( 183) 00:09:56.335 11200.698 - 11260.276: 44.3430% ( 161) 00:09:56.335 11260.276 - 11319.855: 45.9902% ( 175) 00:09:56.335 11319.855 - 11379.433: 47.1950% ( 128) 00:09:56.335 11379.433 - 11439.011: 48.6352% ( 153) 00:09:56.335 11439.011 - 11498.589: 49.8494% ( 129) 00:09:56.335 11498.589 - 11558.167: 50.5930% ( 79) 00:09:56.335 11558.167 - 11617.745: 51.4495% ( 91) 00:09:56.335 11617.745 - 11677.324: 52.2496% ( 85) 00:09:56.335 11677.324 - 11736.902: 53.3038% ( 112) 00:09:56.335 11736.902 - 11796.480: 54.2828% ( 104) 00:09:56.335 11796.480 - 11856.058: 54.8569% ( 61) 00:09:56.335 11856.058 - 11915.636: 55.5346% ( 72) 00:09:56.335 11915.636 - 11975.215: 56.2123% ( 72) 00:09:56.335 11975.215 - 12034.793: 57.0877% ( 93) 00:09:56.335 12034.793 - 12094.371: 57.8878% ( 85) 00:09:56.335 12094.371 - 12153.949: 58.8573% ( 103) 00:09:56.335 12153.949 - 12213.527: 60.0715% ( 129) 00:09:56.335 12213.527 - 12273.105: 61.2858% ( 129) 00:09:56.335 12273.105 - 12332.684: 62.7447% ( 155) 00:09:56.335 12332.684 - 12392.262: 64.4390% ( 180) 00:09:56.335 12392.262 - 12451.840: 66.2274% ( 190) 00:09:56.335 
12451.840 - 12511.418: 67.8087% ( 168) 00:09:56.335 12511.418 - 12570.996: 69.3430% ( 163) 00:09:56.335 12570.996 - 12630.575: 71.0467% ( 181) 00:09:56.335 12630.575 - 12690.153: 72.5151% ( 156) 00:09:56.335 12690.153 - 12749.731: 74.4447% ( 205) 00:09:56.335 12749.731 - 12809.309: 76.1860% ( 185) 00:09:56.335 12809.309 - 12868.887: 77.4849% ( 138) 00:09:56.335 12868.887 - 12928.465: 78.5862% ( 117) 00:09:56.335 12928.465 - 12988.044: 79.5181% ( 99) 00:09:56.335 12988.044 - 13047.622: 80.4688% ( 101) 00:09:56.335 13047.622 - 13107.200: 81.7771% ( 139) 00:09:56.335 13107.200 - 13166.778: 82.6619% ( 94) 00:09:56.335 13166.778 - 13226.356: 83.6126% ( 101) 00:09:56.335 13226.356 - 13285.935: 84.7139% ( 117) 00:09:56.335 13285.935 - 13345.513: 85.3257% ( 65) 00:09:56.335 13345.513 - 13405.091: 85.9752% ( 69) 00:09:56.335 13405.091 - 13464.669: 86.6246% ( 69) 00:09:56.335 13464.669 - 13524.247: 87.2176% ( 63) 00:09:56.335 13524.247 - 13583.825: 87.8953% ( 72) 00:09:56.335 13583.825 - 13643.404: 88.3848% ( 52) 00:09:56.335 13643.404 - 13702.982: 88.7613% ( 40) 00:09:56.335 13702.982 - 13762.560: 89.2131% ( 48) 00:09:56.335 13762.560 - 13822.138: 89.6461% ( 46) 00:09:56.335 13822.138 - 13881.716: 90.1073% ( 49) 00:09:56.335 13881.716 - 13941.295: 90.7474% ( 68) 00:09:56.335 13941.295 - 14000.873: 91.5474% ( 85) 00:09:56.335 14000.873 - 14060.451: 92.4134% ( 92) 00:09:56.335 14060.451 - 14120.029: 93.3264% ( 97) 00:09:56.335 14120.029 - 14179.607: 93.9759% ( 69) 00:09:56.335 14179.607 - 14239.185: 94.4559% ( 51) 00:09:56.335 14239.185 - 14298.764: 95.0772% ( 66) 00:09:56.335 14298.764 - 14358.342: 95.7172% ( 68) 00:09:56.335 14358.342 - 14417.920: 96.1785% ( 49) 00:09:56.335 14417.920 - 14477.498: 96.6867% ( 54) 00:09:56.335 14477.498 - 14537.076: 97.2233% ( 57) 00:09:56.336 14537.076 - 14596.655: 97.5621% ( 36) 00:09:56.336 14596.655 - 14656.233: 97.9104% ( 37) 00:09:56.336 14656.233 - 14715.811: 98.1457% ( 25) 00:09:56.336 14715.811 - 14775.389: 98.3340% ( 20) 00:09:56.336 14775.389 - 14834.967: 98.5034% ( 18) 00:09:56.336 14834.967 - 14894.545: 98.6446% ( 15) 00:09:56.336 14894.545 - 14954.124: 98.7011% ( 6) 00:09:56.336 14954.124 - 15013.702: 98.7481% ( 5) 00:09:56.336 15013.702 - 15073.280: 98.7669% ( 2) 00:09:56.336 15073.280 - 15132.858: 98.7858% ( 2) 00:09:56.336 15132.858 - 15192.436: 98.7952% ( 1) 00:09:56.336 22639.709 - 22758.865: 98.8611% ( 7) 00:09:56.336 22758.865 - 22878.022: 98.9081% ( 5) 00:09:56.336 22878.022 - 22997.178: 98.9834% ( 8) 00:09:56.336 22997.178 - 23116.335: 99.0681% ( 9) 00:09:56.336 23116.335 - 23235.491: 99.1434% ( 8) 00:09:56.336 23235.491 - 23354.647: 99.1717% ( 3) 00:09:56.336 23354.647 - 23473.804: 99.1905% ( 2) 00:09:56.336 23473.804 - 23592.960: 99.2188% ( 3) 00:09:56.336 23592.960 - 23712.116: 99.2470% ( 3) 00:09:56.336 23712.116 - 23831.273: 99.2752% ( 3) 00:09:56.336 23831.273 - 23950.429: 99.3035% ( 3) 00:09:56.336 23950.429 - 24069.585: 99.3411% ( 4) 00:09:56.336 24069.585 - 24188.742: 99.3599% ( 2) 00:09:56.336 24188.742 - 24307.898: 99.3882% ( 3) 00:09:56.336 24307.898 - 24427.055: 99.3976% ( 1) 00:09:56.336 29550.778 - 29669.935: 99.4070% ( 1) 00:09:56.336 29669.935 - 29789.091: 99.4635% ( 6) 00:09:56.336 29789.091 - 29908.247: 99.6141% ( 16) 00:09:56.336 30742.342 - 30980.655: 99.6611% ( 5) 00:09:56.336 30980.655 - 31218.967: 99.7176% ( 6) 00:09:56.336 31218.967 - 31457.280: 99.7741% ( 6) 00:09:56.336 31457.280 - 31695.593: 99.8494% ( 8) 00:09:56.336 31695.593 - 31933.905: 99.9153% ( 7) 00:09:56.336 31933.905 - 32172.218: 99.9812% ( 7) 00:09:56.336 
32172.218 - 32410.531: 100.0000% ( 2) 00:09:56.336 00:09:56.336 17:16:06 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:56.336 00:09:56.336 real 0m2.682s 00:09:56.336 user 0m2.256s 00:09:56.336 sys 0m0.301s 00:09:56.336 17:16:06 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.336 17:16:06 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.336 ************************************ 00:09:56.336 END TEST nvme_perf 00:09:56.336 ************************************ 00:09:56.336 17:16:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:56.336 17:16:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:56.336 17:16:06 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:56.336 17:16:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.336 17:16:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.336 ************************************ 00:09:56.336 START TEST nvme_hello_world 00:09:56.336 ************************************ 00:09:56.336 17:16:06 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:56.594 Initializing NVMe Controllers 00:09:56.594 Attached to 0000:00:10.0 00:09:56.594 Namespace ID: 1 size: 6GB 00:09:56.594 Attached to 0000:00:11.0 00:09:56.594 Namespace ID: 1 size: 5GB 00:09:56.594 Attached to 0000:00:13.0 00:09:56.594 Namespace ID: 1 size: 1GB 00:09:56.594 Attached to 0000:00:12.0 00:09:56.594 Namespace ID: 1 size: 4GB 00:09:56.594 Namespace ID: 2 size: 4GB 00:09:56.594 Namespace ID: 3 size: 4GB 00:09:56.594 Initialization complete. 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 00:09:56.594 INFO: using host memory buffer for IO 00:09:56.594 Hello world! 
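The hello_world example invoked just above prints one "Hello world!" line per namespace it attaches to, six in total for this four-controller setup. Purely as an illustration (this wrapper is not part of the autotest suite), a standalone re-run could be checked as follows; the binary path and the -i 0 shared-memory ID come from the invocation above, and the expected count of six is an assumption tied to this particular namespace layout.

#!/usr/bin/env bash
# Illustrative sketch (not part of this run): re-invoke the hello_world example
# shown above and check that one greeting is printed per attached namespace.
set -euo pipefail

HELLO=/home/vagrant/spdk_repo/spdk/build/examples/hello_world   # path taken from the invocation above
EXPECTED=6                                                      # assumed: 1+1+1+3 namespaces in this particular setup

out=$(sudo "$HELLO" -i 0)
count=$(printf '%s\n' "$out" | grep -c 'Hello world!' || true)

if [ "$count" -eq "$EXPECTED" ]; then
    echo "hello_world OK: $count greetings"
else
    echo "hello_world: unexpected greeting count $count (expected $EXPECTED)" >&2
    exit 1
fi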
00:09:56.594 00:09:56.594 real 0m0.285s 00:09:56.594 user 0m0.115s 00:09:56.594 sys 0m0.127s 00:09:56.594 17:16:07 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.594 17:16:07 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:56.594 ************************************ 00:09:56.594 END TEST nvme_hello_world 00:09:56.594 ************************************ 00:09:56.594 17:16:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:56.594 17:16:07 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:56.594 17:16:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.594 17:16:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.594 17:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.594 ************************************ 00:09:56.594 START TEST nvme_sgl 00:09:56.594 ************************************ 00:09:56.594 17:16:07 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:56.852 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:56.852 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:56.852 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:56.852 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:56.852 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:56.852 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:56.852 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:56.852 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:56.853 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:56.853 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:56.853 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:56.853 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:56.853 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:09:56.853 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:56.853 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:56.853 NVMe Readv/Writev Request test 00:09:56.853 Attached to 0000:00:10.0 00:09:56.853 Attached to 0000:00:11.0 00:09:56.853 Attached to 0000:00:13.0 00:09:56.853 Attached to 0000:00:12.0 00:09:56.853 0000:00:10.0: build_io_request_2 test passed 00:09:56.853 0000:00:10.0: build_io_request_4 test passed 00:09:56.853 0000:00:10.0: build_io_request_5 test passed 00:09:56.853 0000:00:10.0: build_io_request_6 test passed 00:09:56.853 0000:00:10.0: build_io_request_7 test passed 00:09:56.853 0000:00:10.0: build_io_request_10 test passed 00:09:56.853 0000:00:11.0: build_io_request_2 test passed 00:09:56.853 0000:00:11.0: build_io_request_4 test passed 00:09:56.853 0000:00:11.0: build_io_request_5 test passed 00:09:56.853 0000:00:11.0: build_io_request_6 test passed 00:09:56.853 0000:00:11.0: build_io_request_7 test passed 00:09:56.853 0000:00:11.0: build_io_request_10 test passed 00:09:56.853 Cleaning up... 00:09:56.853 00:09:56.853 real 0m0.368s 00:09:56.853 user 0m0.176s 00:09:56.853 sys 0m0.143s 00:09:56.853 17:16:07 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.853 ************************************ 00:09:56.853 END TEST nvme_sgl 00:09:56.853 ************************************ 00:09:56.853 17:16:07 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:57.111 17:16:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:57.111 17:16:07 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:57.111 17:16:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.111 17:16:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.111 17:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.111 ************************************ 00:09:57.111 START TEST nvme_e2edp 00:09:57.111 ************************************ 00:09:57.111 17:16:07 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:57.368 NVMe Write/Read with End-to-End data protection test 00:09:57.368 Attached to 0000:00:10.0 00:09:57.368 Attached to 0000:00:11.0 00:09:57.368 Attached to 0000:00:13.0 00:09:57.368 Attached to 0000:00:12.0 00:09:57.368 Cleaning up... 
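The nvme_sgl output above reports, per controller, which build_io_request cases could be built and submitted and which were rejected with "Invalid IO length parameter". Purely as an illustration, a captured copy of that output (the sgl.log file name is assumed) could be tallied like this:

#!/usr/bin/env bash
# Illustrative only: tally the nvme_sgl results above, assuming the console
# output was captured to sgl.log.
set -euo pipefail
log=sgl.log
rejected=$(grep -c 'build_io_request.*Invalid IO length' "$log" || true)
passed=$(grep -c 'build_io_request.*test passed' "$log" || true)
echo "SGL cases rejected for invalid IO length: $rejected"
echo "SGL cases built and submitted OK:         $passed"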
00:09:57.368 00:09:57.368 real 0m0.298s 00:09:57.368 user 0m0.111s 00:09:57.368 sys 0m0.142s 00:09:57.368 17:16:08 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.368 17:16:08 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:57.368 ************************************ 00:09:57.368 END TEST nvme_e2edp 00:09:57.368 ************************************ 00:09:57.368 17:16:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:57.368 17:16:08 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:57.368 17:16:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.368 17:16:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.368 17:16:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.368 ************************************ 00:09:57.368 START TEST nvme_reserve 00:09:57.368 ************************************ 00:09:57.368 17:16:08 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:57.625 ===================================================== 00:09:57.625 NVMe Controller at PCI bus 0, device 16, function 0 00:09:57.625 ===================================================== 00:09:57.625 Reservations: Not Supported 00:09:57.625 ===================================================== 00:09:57.625 NVMe Controller at PCI bus 0, device 17, function 0 00:09:57.625 ===================================================== 00:09:57.625 Reservations: Not Supported 00:09:57.625 ===================================================== 00:09:57.625 NVMe Controller at PCI bus 0, device 19, function 0 00:09:57.625 ===================================================== 00:09:57.625 Reservations: Not Supported 00:09:57.625 ===================================================== 00:09:57.625 NVMe Controller at PCI bus 0, device 18, function 0 00:09:57.625 ===================================================== 00:09:57.625 Reservations: Not Supported 00:09:57.625 Reservation test passed 00:09:57.625 00:09:57.625 real 0m0.285s 00:09:57.625 user 0m0.096s 00:09:57.625 sys 0m0.143s 00:09:57.625 17:16:08 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.625 17:16:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:57.625 ************************************ 00:09:57.625 END TEST nvme_reserve 00:09:57.625 ************************************ 00:09:57.625 17:16:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:57.625 17:16:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:57.625 17:16:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.625 17:16:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.625 17:16:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.625 ************************************ 00:09:57.625 START TEST nvme_err_injection 00:09:57.625 ************************************ 00:09:57.625 17:16:08 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:57.882 NVMe Error Injection test 00:09:57.882 Attached to 0000:00:10.0 00:09:57.882 Attached to 0000:00:11.0 00:09:57.882 Attached to 0000:00:13.0 00:09:57.882 Attached to 0000:00:12.0 00:09:57.882 0000:00:11.0: get features failed as expected 00:09:57.882 0000:00:13.0: get features 
failed as expected 00:09:57.882 0000:00:12.0: get features failed as expected 00:09:57.882 0000:00:10.0: get features failed as expected 00:09:57.882 0000:00:10.0: get features successfully as expected 00:09:57.882 0000:00:11.0: get features successfully as expected 00:09:57.882 0000:00:13.0: get features successfully as expected 00:09:57.882 0000:00:12.0: get features successfully as expected 00:09:57.882 0000:00:10.0: read failed as expected 00:09:57.882 0000:00:11.0: read failed as expected 00:09:57.882 0000:00:13.0: read failed as expected 00:09:57.882 0000:00:12.0: read failed as expected 00:09:57.882 0000:00:10.0: read successfully as expected 00:09:57.882 0000:00:11.0: read successfully as expected 00:09:57.882 0000:00:13.0: read successfully as expected 00:09:57.882 0000:00:12.0: read successfully as expected 00:09:57.882 Cleaning up... 00:09:57.882 00:09:57.882 real 0m0.296s 00:09:57.882 user 0m0.104s 00:09:57.882 sys 0m0.144s 00:09:57.882 17:16:08 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.882 17:16:08 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:57.882 ************************************ 00:09:57.882 END TEST nvme_err_injection 00:09:57.882 ************************************ 00:09:58.140 17:16:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:58.140 17:16:08 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:58.140 17:16:08 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:58.140 17:16:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.140 17:16:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.140 ************************************ 00:09:58.140 START TEST nvme_overhead 00:09:58.140 ************************************ 00:09:58.140 17:16:08 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:59.518 Initializing NVMe Controllers 00:09:59.518 Attached to 0000:00:10.0 00:09:59.518 Attached to 0000:00:11.0 00:09:59.518 Attached to 0000:00:13.0 00:09:59.518 Attached to 0000:00:12.0 00:09:59.518 Initialization complete. Launching workers. 
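The overhead tool initialized above measures per-IO software overhead and, as shown in the results that follow, reports submit and complete times in nanoseconds (avg, min, max) along with full histograms. Purely as an illustration, the two summary lines could be pulled out of a captured copy of that output (the overhead.log file name is assumed) like this:

#!/usr/bin/env bash
# Illustrative only: extract the submit/complete overhead summary lines from a
# captured copy of the output below (file name overhead.log is assumed).
set -euo pipefail
grep -E '(submit|complete) \(in ns\) avg, min, max' overhead.log

# Average submit overhead alone, e.g. 15727.4 ns in the run below.
awk '/submit \(in ns\) avg, min, max/ { sub(/.*= */, ""); split($0, v, ","); print v[1] " ns"; exit }' overhead.log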
00:09:59.518 submit (in ns) avg, min, max = 15727.4, 12145.5, 90498.6 00:09:59.518 complete (in ns) avg, min, max = 11723.3, 8557.3, 82762.7 00:09:59.518 00:09:59.518 Submit histogram 00:09:59.518 ================ 00:09:59.518 Range in us Cumulative Count 00:09:59.518 12.102 - 12.160: 0.0115% ( 1) 00:09:59.518 12.160 - 12.218: 0.0461% ( 3) 00:09:59.518 12.218 - 12.276: 0.0576% ( 1) 00:09:59.518 12.335 - 12.393: 0.0807% ( 2) 00:09:59.518 12.393 - 12.451: 0.1960% ( 10) 00:09:59.518 12.451 - 12.509: 0.3459% ( 13) 00:09:59.518 12.509 - 12.567: 0.5188% ( 15) 00:09:59.518 12.567 - 12.625: 0.7033% ( 16) 00:09:59.518 12.625 - 12.684: 0.9338% ( 20) 00:09:59.518 12.684 - 12.742: 1.1298% ( 17) 00:09:59.518 12.742 - 12.800: 1.5910% ( 40) 00:09:59.518 12.800 - 12.858: 2.4326% ( 73) 00:09:59.518 12.858 - 12.916: 3.3087% ( 76) 00:09:59.518 12.916 - 12.975: 4.2887% ( 85) 00:09:59.518 12.975 - 13.033: 5.0496% ( 66) 00:09:59.518 13.033 - 13.091: 5.7874% ( 64) 00:09:59.518 13.091 - 13.149: 6.8250% ( 90) 00:09:59.518 13.149 - 13.207: 8.4621% ( 142) 00:09:59.518 13.207 - 13.265: 10.6986% ( 194) 00:09:59.518 13.265 - 13.324: 12.5663% ( 162) 00:09:59.518 13.324 - 13.382: 14.4801% ( 166) 00:09:59.518 13.382 - 13.440: 15.9211% ( 125) 00:09:59.518 13.440 - 13.498: 18.1462% ( 193) 00:09:59.518 13.498 - 13.556: 22.1928% ( 351) 00:09:59.518 13.556 - 13.615: 28.7872% ( 572) 00:09:59.518 13.615 - 13.673: 36.1656% ( 640) 00:09:59.518 13.673 - 13.731: 42.7254% ( 569) 00:09:59.518 13.731 - 13.789: 48.2592% ( 480) 00:09:59.518 13.789 - 13.847: 51.5910% ( 289) 00:09:59.518 13.847 - 13.905: 54.1273% ( 220) 00:09:59.518 13.905 - 13.964: 56.3293% ( 191) 00:09:59.518 13.964 - 14.022: 57.7588% ( 124) 00:09:59.518 14.022 - 14.080: 59.0385% ( 111) 00:09:59.518 14.080 - 14.138: 60.0184% ( 85) 00:09:59.518 14.138 - 14.196: 61.0214% ( 87) 00:09:59.518 14.196 - 14.255: 61.6325% ( 53) 00:09:59.518 14.255 - 14.313: 62.2665% ( 55) 00:09:59.518 14.313 - 14.371: 62.7507% ( 42) 00:09:59.518 14.371 - 14.429: 63.0620% ( 27) 00:09:59.518 14.429 - 14.487: 63.3502% ( 25) 00:09:59.518 14.487 - 14.545: 63.5808% ( 20) 00:09:59.518 14.545 - 14.604: 63.7883% ( 18) 00:09:59.518 14.604 - 14.662: 63.9843% ( 17) 00:09:59.518 14.662 - 14.720: 64.1918% ( 18) 00:09:59.518 14.720 - 14.778: 64.3417% ( 13) 00:09:59.518 14.778 - 14.836: 64.4109% ( 6) 00:09:59.518 14.836 - 14.895: 64.4916% ( 7) 00:09:59.518 14.895 - 15.011: 64.7337% ( 21) 00:09:59.518 15.011 - 15.127: 64.8490% ( 10) 00:09:59.518 15.127 - 15.244: 64.9412% ( 8) 00:09:59.518 15.244 - 15.360: 65.1257% ( 16) 00:09:59.518 15.360 - 15.476: 65.2409% ( 10) 00:09:59.518 15.476 - 15.593: 65.3332% ( 8) 00:09:59.518 15.593 - 15.709: 65.4024% ( 6) 00:09:59.518 15.709 - 15.825: 65.4485% ( 4) 00:09:59.518 15.825 - 15.942: 65.4831% ( 3) 00:09:59.518 15.942 - 16.058: 65.5292% ( 4) 00:09:59.518 16.058 - 16.175: 65.5868% ( 5) 00:09:59.518 16.175 - 16.291: 65.5983% ( 1) 00:09:59.518 16.291 - 16.407: 66.0134% ( 36) 00:09:59.518 16.407 - 16.524: 68.7111% ( 234) 00:09:59.518 16.524 - 16.640: 74.2910% ( 484) 00:09:59.518 16.640 - 16.756: 79.4558% ( 448) 00:09:59.518 16.756 - 16.873: 81.4964% ( 177) 00:09:59.518 16.873 - 16.989: 83.0528% ( 135) 00:09:59.518 16.989 - 17.105: 84.0789% ( 89) 00:09:59.518 17.105 - 17.222: 85.1049% ( 89) 00:09:59.518 17.222 - 17.338: 86.0618% ( 83) 00:09:59.518 17.338 - 17.455: 86.7074% ( 56) 00:09:59.518 17.455 - 17.571: 87.0763% ( 32) 00:09:59.518 17.571 - 17.687: 87.3069% ( 20) 00:09:59.518 17.687 - 17.804: 87.5490% ( 21) 00:09:59.518 17.804 - 17.920: 87.7911% ( 21) 00:09:59.518 17.920 - 
18.036: 88.0101% ( 19) 00:09:59.518 18.036 - 18.153: 88.2177% ( 18) 00:09:59.518 18.153 - 18.269: 88.4367% ( 19) 00:09:59.518 18.269 - 18.385: 88.5981% ( 14) 00:09:59.518 18.385 - 18.502: 88.7365% ( 12) 00:09:59.518 18.502 - 18.618: 88.8633% ( 11) 00:09:59.518 18.618 - 18.735: 89.0593% ( 17) 00:09:59.518 18.735 - 18.851: 89.2322% ( 15) 00:09:59.518 18.851 - 18.967: 89.5089% ( 24) 00:09:59.518 18.967 - 19.084: 89.8317% ( 28) 00:09:59.518 19.084 - 19.200: 90.0738% ( 21) 00:09:59.518 19.200 - 19.316: 90.2352% ( 14) 00:09:59.518 19.316 - 19.433: 90.3966% ( 14) 00:09:59.518 19.433 - 19.549: 90.5119% ( 10) 00:09:59.518 19.549 - 19.665: 90.5810% ( 6) 00:09:59.518 19.665 - 19.782: 90.7309% ( 13) 00:09:59.518 19.782 - 19.898: 90.8116% ( 7) 00:09:59.518 19.898 - 20.015: 90.9039% ( 8) 00:09:59.518 20.015 - 20.131: 91.0191% ( 10) 00:09:59.518 20.131 - 20.247: 91.1229% ( 9) 00:09:59.518 20.247 - 20.364: 91.1575% ( 3) 00:09:59.518 20.364 - 20.480: 91.3074% ( 13) 00:09:59.518 20.480 - 20.596: 91.3419% ( 3) 00:09:59.518 20.596 - 20.713: 91.3765% ( 3) 00:09:59.518 20.713 - 20.829: 91.4688% ( 8) 00:09:59.518 20.829 - 20.945: 91.6071% ( 12) 00:09:59.518 20.945 - 21.062: 91.7339% ( 11) 00:09:59.518 21.062 - 21.178: 91.7916% ( 5) 00:09:59.518 21.178 - 21.295: 91.8261% ( 3) 00:09:59.518 21.295 - 21.411: 91.8723% ( 4) 00:09:59.518 21.411 - 21.527: 91.9414% ( 6) 00:09:59.518 21.527 - 21.644: 92.0337% ( 8) 00:09:59.518 21.644 - 21.760: 92.0913% ( 5) 00:09:59.518 21.760 - 21.876: 92.1720% ( 7) 00:09:59.518 21.876 - 21.993: 92.2066% ( 3) 00:09:59.518 21.993 - 22.109: 92.2642% ( 5) 00:09:59.518 22.109 - 22.225: 92.3449% ( 7) 00:09:59.518 22.225 - 22.342: 92.4256% ( 7) 00:09:59.518 22.342 - 22.458: 92.5179% ( 8) 00:09:59.518 22.458 - 22.575: 92.5870% ( 6) 00:09:59.518 22.575 - 22.691: 92.6793% ( 8) 00:09:59.518 22.691 - 22.807: 92.7139% ( 3) 00:09:59.518 22.807 - 22.924: 92.7830% ( 6) 00:09:59.518 22.924 - 23.040: 92.8637% ( 7) 00:09:59.518 23.040 - 23.156: 92.9329% ( 6) 00:09:59.518 23.156 - 23.273: 93.0712% ( 12) 00:09:59.518 23.273 - 23.389: 93.0943% ( 2) 00:09:59.518 23.389 - 23.505: 93.2211% ( 11) 00:09:59.518 23.505 - 23.622: 93.2442% ( 2) 00:09:59.518 23.622 - 23.738: 93.2903% ( 4) 00:09:59.518 23.738 - 23.855: 93.3134% ( 2) 00:09:59.518 23.855 - 23.971: 93.3479% ( 3) 00:09:59.518 23.971 - 24.087: 93.4171% ( 6) 00:09:59.518 24.087 - 24.204: 93.4863% ( 6) 00:09:59.518 24.204 - 24.320: 93.5785% ( 8) 00:09:59.518 24.320 - 24.436: 93.6362% ( 5) 00:09:59.518 24.436 - 24.553: 93.6938% ( 5) 00:09:59.518 24.553 - 24.669: 93.8206% ( 11) 00:09:59.518 24.669 - 24.785: 93.9244% ( 9) 00:09:59.518 24.785 - 24.902: 93.9590% ( 3) 00:09:59.518 24.902 - 25.018: 94.0281% ( 6) 00:09:59.518 25.018 - 25.135: 94.1895% ( 14) 00:09:59.518 25.135 - 25.251: 94.2356% ( 4) 00:09:59.518 25.251 - 25.367: 94.2933% ( 5) 00:09:59.518 25.367 - 25.484: 94.3509% ( 5) 00:09:59.518 25.484 - 25.600: 94.4316% ( 7) 00:09:59.518 25.600 - 25.716: 94.4547% ( 2) 00:09:59.518 25.716 - 25.833: 94.4893% ( 3) 00:09:59.518 25.833 - 25.949: 94.5585% ( 6) 00:09:59.518 25.949 - 26.065: 94.6046% ( 4) 00:09:59.518 26.065 - 26.182: 94.6737% ( 6) 00:09:59.518 26.182 - 26.298: 94.7429% ( 6) 00:09:59.518 26.298 - 26.415: 94.7544% ( 1) 00:09:59.518 26.415 - 26.531: 94.7775% ( 2) 00:09:59.518 26.531 - 26.647: 94.8351% ( 5) 00:09:59.518 26.647 - 26.764: 94.9158% ( 7) 00:09:59.518 26.764 - 26.880: 94.9735% ( 5) 00:09:59.518 26.880 - 26.996: 95.0311% ( 5) 00:09:59.518 26.996 - 27.113: 95.1003% ( 6) 00:09:59.518 27.113 - 27.229: 95.1464% ( 4) 00:09:59.518 27.229 - 27.345: 
95.1579% ( 1) 00:09:59.518 27.345 - 27.462: 95.2156% ( 5) 00:09:59.518 27.462 - 27.578: 95.3193% ( 9) 00:09:59.518 27.578 - 27.695: 95.4923% ( 15) 00:09:59.518 27.695 - 27.811: 95.6998% ( 18) 00:09:59.518 27.811 - 27.927: 95.9534% ( 22) 00:09:59.518 27.927 - 28.044: 96.1609% ( 18) 00:09:59.518 28.044 - 28.160: 96.2532% ( 8) 00:09:59.518 28.160 - 28.276: 96.5414% ( 25) 00:09:59.518 28.276 - 28.393: 96.9449% ( 35) 00:09:59.518 28.393 - 28.509: 97.1755% ( 20) 00:09:59.518 28.509 - 28.625: 97.4752% ( 26) 00:09:59.518 28.625 - 28.742: 97.7173% ( 21) 00:09:59.518 28.742 - 28.858: 97.8787% ( 14) 00:09:59.518 28.858 - 28.975: 98.1208% ( 21) 00:09:59.518 28.975 - 29.091: 98.2131% ( 8) 00:09:59.519 29.091 - 29.207: 98.3860% ( 15) 00:09:59.519 29.207 - 29.324: 98.5243% ( 12) 00:09:59.519 29.324 - 29.440: 98.5704% ( 4) 00:09:59.519 29.440 - 29.556: 98.6742% ( 9) 00:09:59.519 29.556 - 29.673: 98.8010% ( 11) 00:09:59.519 29.673 - 29.789: 98.8817% ( 7) 00:09:59.519 29.789 - 30.022: 98.9394% ( 5) 00:09:59.519 30.022 - 30.255: 99.0431% ( 9) 00:09:59.519 30.255 - 30.487: 99.1469% ( 9) 00:09:59.519 30.487 - 30.720: 99.1815% ( 3) 00:09:59.519 30.720 - 30.953: 99.2160% ( 3) 00:09:59.519 30.953 - 31.185: 99.2622% ( 4) 00:09:59.519 31.185 - 31.418: 99.2852% ( 2) 00:09:59.519 31.418 - 31.651: 99.2967% ( 1) 00:09:59.519 31.651 - 31.884: 99.3083% ( 1) 00:09:59.519 31.884 - 32.116: 99.3429% ( 3) 00:09:59.519 32.116 - 32.349: 99.3544% ( 1) 00:09:59.519 32.349 - 32.582: 99.3774% ( 2) 00:09:59.519 32.582 - 32.815: 99.3890% ( 1) 00:09:59.519 32.815 - 33.047: 99.4236% ( 3) 00:09:59.519 33.047 - 33.280: 99.4466% ( 2) 00:09:59.519 33.280 - 33.513: 99.4582% ( 1) 00:09:59.519 33.513 - 33.745: 99.4697% ( 1) 00:09:59.519 33.745 - 33.978: 99.5043% ( 3) 00:09:59.519 33.978 - 34.211: 99.5273% ( 2) 00:09:59.519 34.211 - 34.444: 99.5619% ( 3) 00:09:59.519 34.444 - 34.676: 99.5850% ( 2) 00:09:59.519 35.142 - 35.375: 99.5965% ( 1) 00:09:59.519 35.375 - 35.607: 99.6196% ( 2) 00:09:59.519 35.840 - 36.073: 99.6311% ( 1) 00:09:59.519 37.702 - 37.935: 99.6426% ( 1) 00:09:59.519 38.167 - 38.400: 99.6541% ( 1) 00:09:59.519 38.865 - 39.098: 99.6657% ( 1) 00:09:59.519 39.098 - 39.331: 99.6772% ( 1) 00:09:59.519 40.262 - 40.495: 99.6887% ( 1) 00:09:59.519 41.425 - 41.658: 99.7003% ( 1) 00:09:59.519 42.356 - 42.589: 99.7118% ( 1) 00:09:59.519 42.822 - 43.055: 99.7348% ( 2) 00:09:59.519 43.055 - 43.287: 99.7464% ( 1) 00:09:59.519 43.287 - 43.520: 99.7694% ( 2) 00:09:59.519 43.520 - 43.753: 99.8271% ( 5) 00:09:59.519 43.753 - 43.985: 99.8501% ( 2) 00:09:59.519 43.985 - 44.218: 99.8617% ( 1) 00:09:59.519 44.451 - 44.684: 99.8732% ( 1) 00:09:59.519 44.684 - 44.916: 99.8962% ( 2) 00:09:59.519 44.916 - 45.149: 99.9078% ( 1) 00:09:59.519 45.382 - 45.615: 99.9193% ( 1) 00:09:59.519 45.615 - 45.847: 99.9308% ( 1) 00:09:59.519 46.545 - 46.778: 99.9424% ( 1) 00:09:59.519 47.244 - 47.476: 99.9539% ( 1) 00:09:59.519 50.967 - 51.200: 99.9654% ( 1) 00:09:59.519 56.553 - 56.785: 99.9769% ( 1) 00:09:59.519 80.989 - 81.455: 99.9885% ( 1) 00:09:59.519 90.298 - 90.764: 100.0000% ( 1) 00:09:59.519 00:09:59.519 Complete histogram 00:09:59.519 ================== 00:09:59.519 Range in us Cumulative Count 00:09:59.519 8.553 - 8.611: 0.0115% ( 1) 00:09:59.519 8.727 - 8.785: 0.0346% ( 2) 00:09:59.519 8.785 - 8.844: 0.0692% ( 3) 00:09:59.519 8.844 - 8.902: 0.1038% ( 3) 00:09:59.519 8.902 - 8.960: 0.1960% ( 8) 00:09:59.519 8.960 - 9.018: 0.3459% ( 13) 00:09:59.519 9.018 - 9.076: 0.5188% ( 15) 00:09:59.519 9.076 - 9.135: 0.8531% ( 29) 00:09:59.519 9.135 - 9.193: 1.3950% ( 
47) 00:09:59.519 9.193 - 9.251: 2.0175% ( 54) 00:09:59.519 9.251 - 9.309: 2.7438% ( 63) 00:09:59.519 9.309 - 9.367: 3.5854% ( 73) 00:09:59.519 9.367 - 9.425: 4.7383% ( 100) 00:09:59.519 9.425 - 9.484: 6.1794% ( 125) 00:09:59.519 9.484 - 9.542: 7.8280% ( 143) 00:09:59.519 9.542 - 9.600: 9.1423% ( 114) 00:09:59.519 9.600 - 9.658: 11.3788% ( 194) 00:09:59.519 9.658 - 9.716: 15.7252% ( 377) 00:09:59.519 9.716 - 9.775: 22.3542% ( 575) 00:09:59.519 9.775 - 9.833: 30.9085% ( 742) 00:09:59.519 9.833 - 9.891: 38.5635% ( 664) 00:09:59.519 9.891 - 9.949: 44.4316% ( 509) 00:09:59.519 9.949 - 10.007: 49.2506% ( 418) 00:09:59.519 10.007 - 10.065: 52.6401% ( 294) 00:09:59.519 10.065 - 10.124: 55.2225% ( 224) 00:09:59.519 10.124 - 10.182: 57.1363% ( 166) 00:09:59.519 10.182 - 10.240: 58.5312% ( 121) 00:09:59.519 10.240 - 10.298: 59.2230% ( 60) 00:09:59.519 10.298 - 10.356: 59.7994% ( 50) 00:09:59.519 10.356 - 10.415: 60.2951% ( 43) 00:09:59.519 10.415 - 10.473: 60.7217% ( 37) 00:09:59.519 10.473 - 10.531: 61.0560% ( 29) 00:09:59.519 10.531 - 10.589: 61.3673% ( 27) 00:09:59.519 10.589 - 10.647: 61.7132% ( 30) 00:09:59.519 10.647 - 10.705: 62.0706% ( 31) 00:09:59.519 10.705 - 10.764: 62.4856% ( 36) 00:09:59.519 10.764 - 10.822: 63.1081% ( 54) 00:09:59.519 10.822 - 10.880: 63.7883% ( 59) 00:09:59.519 10.880 - 10.938: 64.4109% ( 54) 00:09:59.519 10.938 - 10.996: 64.9527% ( 47) 00:09:59.519 10.996 - 11.055: 65.3447% ( 34) 00:09:59.519 11.055 - 11.113: 65.6445% ( 26) 00:09:59.519 11.113 - 11.171: 65.9327% ( 25) 00:09:59.519 11.171 - 11.229: 66.1056% ( 15) 00:09:59.519 11.229 - 11.287: 66.1517% ( 4) 00:09:59.519 11.287 - 11.345: 66.2324% ( 7) 00:09:59.519 11.345 - 11.404: 66.2785% ( 4) 00:09:59.519 11.404 - 11.462: 66.4399% ( 14) 00:09:59.519 11.462 - 11.520: 66.5206% ( 7) 00:09:59.519 11.520 - 11.578: 66.5783% ( 5) 00:09:59.519 11.578 - 11.636: 66.6475% ( 6) 00:09:59.519 11.636 - 11.695: 66.6705% ( 2) 00:09:59.519 11.695 - 11.753: 66.7512% ( 7) 00:09:59.519 11.753 - 11.811: 66.8089% ( 5) 00:09:59.519 11.811 - 11.869: 66.8550% ( 4) 00:09:59.519 11.869 - 11.927: 67.1317% ( 24) 00:09:59.519 11.927 - 11.985: 68.1001% ( 84) 00:09:59.519 11.985 - 12.044: 70.1291% ( 176) 00:09:59.519 12.044 - 12.102: 73.2995% ( 275) 00:09:59.519 12.102 - 12.160: 77.0694% ( 327) 00:09:59.519 12.160 - 12.218: 79.9746% ( 252) 00:09:59.519 12.218 - 12.276: 81.9576% ( 172) 00:09:59.519 12.276 - 12.335: 83.2488% ( 112) 00:09:59.519 12.335 - 12.393: 84.0212% ( 67) 00:09:59.519 12.393 - 12.451: 84.5976% ( 50) 00:09:59.519 12.451 - 12.509: 84.9896% ( 34) 00:09:59.519 12.509 - 12.567: 85.2778% ( 25) 00:09:59.519 12.567 - 12.625: 85.5430% ( 23) 00:09:59.519 12.625 - 12.684: 85.6352% ( 8) 00:09:59.519 12.684 - 12.742: 85.8312% ( 17) 00:09:59.519 12.742 - 12.800: 85.9004% ( 6) 00:09:59.519 12.800 - 12.858: 86.0387% ( 12) 00:09:59.519 12.858 - 12.916: 86.1194% ( 7) 00:09:59.519 12.916 - 12.975: 86.2924% ( 15) 00:09:59.519 12.975 - 13.033: 86.4307% ( 12) 00:09:59.519 13.033 - 13.091: 86.6498% ( 19) 00:09:59.519 13.091 - 13.149: 86.9380% ( 25) 00:09:59.519 13.149 - 13.207: 87.2608% ( 28) 00:09:59.519 13.207 - 13.265: 87.6066% ( 30) 00:09:59.519 13.265 - 13.324: 88.1024% ( 43) 00:09:59.519 13.324 - 13.382: 88.5289% ( 37) 00:09:59.519 13.382 - 13.440: 88.7941% ( 23) 00:09:59.519 13.440 - 13.498: 88.9786% ( 16) 00:09:59.519 13.498 - 13.556: 89.1400% ( 14) 00:09:59.519 13.556 - 13.615: 89.3359% ( 17) 00:09:59.519 13.615 - 13.673: 89.4743% ( 12) 00:09:59.519 13.673 - 13.731: 89.5665% ( 8) 00:09:59.519 13.731 - 13.789: 89.6357% ( 6) 00:09:59.519 13.789 - 
13.847: 89.7164% ( 7) 00:09:59.519 13.847 - 13.905: 89.7625% ( 4) 00:09:59.519 13.905 - 13.964: 89.8086% ( 4) 00:09:59.519 13.964 - 14.022: 89.8663% ( 5) 00:09:59.519 14.022 - 14.080: 89.9124% ( 4) 00:09:59.519 14.080 - 14.138: 89.9700% ( 5) 00:09:59.519 14.138 - 14.196: 90.0046% ( 3) 00:09:59.519 14.196 - 14.255: 90.0277% ( 2) 00:09:59.519 14.255 - 14.313: 90.0853% ( 5) 00:09:59.519 14.313 - 14.371: 90.1084% ( 2) 00:09:59.519 14.371 - 14.429: 90.1545% ( 4) 00:09:59.519 14.429 - 14.487: 90.2006% ( 4) 00:09:59.519 14.487 - 14.545: 90.2582% ( 5) 00:09:59.519 14.545 - 14.604: 90.3044% ( 4) 00:09:59.519 14.604 - 14.662: 90.3505% ( 4) 00:09:59.519 14.662 - 14.720: 90.4196% ( 6) 00:09:59.519 14.720 - 14.778: 90.5003% ( 7) 00:09:59.519 14.778 - 14.836: 90.5234% ( 2) 00:09:59.519 14.836 - 14.895: 90.6156% ( 8) 00:09:59.519 14.895 - 15.011: 90.7194% ( 9) 00:09:59.519 15.011 - 15.127: 90.8577% ( 12) 00:09:59.519 15.127 - 15.244: 90.8923% ( 3) 00:09:59.519 15.244 - 15.360: 90.9500% ( 5) 00:09:59.519 15.360 - 15.476: 91.0537% ( 9) 00:09:59.519 15.476 - 15.593: 91.1344% ( 7) 00:09:59.519 15.593 - 15.709: 91.2151% ( 7) 00:09:59.519 15.709 - 15.825: 91.2843% ( 6) 00:09:59.519 15.825 - 15.942: 91.3996% ( 10) 00:09:59.519 15.942 - 16.058: 91.5379% ( 12) 00:09:59.519 16.058 - 16.175: 91.6647% ( 11) 00:09:59.519 16.175 - 16.291: 91.8031% ( 12) 00:09:59.519 16.291 - 16.407: 91.8953% ( 8) 00:09:59.519 16.407 - 16.524: 91.9530% ( 5) 00:09:59.519 16.524 - 16.640: 92.0337% ( 7) 00:09:59.519 16.756 - 16.873: 92.1259% ( 8) 00:09:59.519 16.873 - 16.989: 92.1490% ( 2) 00:09:59.519 16.989 - 17.105: 92.2297% ( 7) 00:09:59.519 17.105 - 17.222: 92.2988% ( 6) 00:09:59.519 17.222 - 17.338: 92.4256% ( 11) 00:09:59.519 17.338 - 17.455: 92.5063% ( 7) 00:09:59.519 17.455 - 17.571: 92.6677% ( 14) 00:09:59.519 17.571 - 17.687: 92.7830% ( 10) 00:09:59.519 17.687 - 17.804: 92.8407% ( 5) 00:09:59.519 17.804 - 17.920: 92.9098% ( 6) 00:09:59.519 17.920 - 18.036: 92.9560% ( 4) 00:09:59.519 18.036 - 18.153: 92.9675% ( 1) 00:09:59.519 18.153 - 18.269: 93.0136% ( 4) 00:09:59.519 18.269 - 18.385: 93.0712% ( 5) 00:09:59.519 18.385 - 18.502: 93.1635% ( 8) 00:09:59.519 18.502 - 18.618: 93.2096% ( 4) 00:09:59.520 18.618 - 18.735: 93.2903% ( 7) 00:09:59.520 18.735 - 18.851: 93.3249% ( 3) 00:09:59.520 18.851 - 18.967: 93.3595% ( 3) 00:09:59.520 18.967 - 19.084: 93.4748% ( 10) 00:09:59.520 19.084 - 19.200: 93.5324% ( 5) 00:09:59.520 19.200 - 19.316: 93.5900% ( 5) 00:09:59.520 19.316 - 19.433: 93.6016% ( 1) 00:09:59.520 19.433 - 19.549: 93.6362% ( 3) 00:09:59.520 19.549 - 19.665: 93.6823% ( 4) 00:09:59.520 19.665 - 19.782: 93.6938% ( 1) 00:09:59.520 19.782 - 19.898: 93.7284% ( 3) 00:09:59.520 19.898 - 20.015: 93.7399% ( 1) 00:09:59.520 20.015 - 20.131: 93.7745% ( 3) 00:09:59.520 20.131 - 20.247: 93.8437% ( 6) 00:09:59.520 20.247 - 20.364: 93.9128% ( 6) 00:09:59.520 20.364 - 20.480: 93.9820% ( 6) 00:09:59.520 20.480 - 20.596: 94.0742% ( 8) 00:09:59.520 20.596 - 20.713: 94.1319% ( 5) 00:09:59.520 20.713 - 20.829: 94.2587% ( 11) 00:09:59.520 20.829 - 20.945: 94.3279% ( 6) 00:09:59.520 20.945 - 21.062: 94.4316% ( 9) 00:09:59.520 21.062 - 21.178: 94.4662% ( 3) 00:09:59.520 21.178 - 21.295: 94.5469% ( 7) 00:09:59.520 21.295 - 21.411: 94.6276% ( 7) 00:09:59.520 21.411 - 21.527: 94.6507% ( 2) 00:09:59.520 21.527 - 21.644: 94.7083% ( 5) 00:09:59.520 21.644 - 21.760: 94.7544% ( 4) 00:09:59.520 21.760 - 21.876: 94.8121% ( 5) 00:09:59.520 21.876 - 21.993: 94.8351% ( 2) 00:09:59.520 21.993 - 22.109: 94.8928% ( 5) 00:09:59.520 22.109 - 22.225: 94.9043% ( 1) 
00:09:59.520 22.225 - 22.342: 94.9274% ( 2) 00:09:59.520 22.342 - 22.458: 94.9504% ( 2) 00:09:59.520 22.458 - 22.575: 94.9735% ( 2) 00:09:59.520 22.575 - 22.691: 94.9965% ( 2) 00:09:59.520 22.807 - 22.924: 95.0427% ( 4) 00:09:59.520 22.924 - 23.040: 95.0772% ( 3) 00:09:59.520 23.040 - 23.156: 95.0888% ( 1) 00:09:59.520 23.156 - 23.273: 95.1118% ( 2) 00:09:59.520 23.273 - 23.389: 95.1464% ( 3) 00:09:59.520 23.389 - 23.505: 95.1810% ( 3) 00:09:59.520 23.505 - 23.622: 95.1925% ( 1) 00:09:59.520 23.622 - 23.738: 95.2617% ( 6) 00:09:59.520 23.738 - 23.855: 95.3078% ( 4) 00:09:59.520 23.855 - 23.971: 95.4577% ( 13) 00:09:59.520 23.971 - 24.087: 95.6076% ( 13) 00:09:59.520 24.087 - 24.204: 95.8266% ( 19) 00:09:59.520 24.204 - 24.320: 96.0687% ( 21) 00:09:59.520 24.320 - 24.436: 96.5644% ( 43) 00:09:59.520 24.436 - 24.553: 96.9218% ( 31) 00:09:59.520 24.553 - 24.669: 97.1524% ( 20) 00:09:59.520 24.669 - 24.785: 97.4060% ( 22) 00:09:59.520 24.785 - 24.902: 97.6712% ( 23) 00:09:59.520 24.902 - 25.018: 97.8902% ( 19) 00:09:59.520 25.018 - 25.135: 98.0632% ( 15) 00:09:59.520 25.135 - 25.251: 98.2476% ( 16) 00:09:59.520 25.251 - 25.367: 98.3514% ( 9) 00:09:59.520 25.367 - 25.484: 98.4436% ( 8) 00:09:59.520 25.484 - 25.600: 98.5243% ( 7) 00:09:59.520 25.600 - 25.716: 98.6050% ( 7) 00:09:59.520 25.716 - 25.833: 98.6742% ( 6) 00:09:59.520 25.833 - 25.949: 98.8125% ( 12) 00:09:59.520 25.949 - 26.065: 98.8817% ( 6) 00:09:59.520 26.065 - 26.182: 98.9394% ( 5) 00:09:59.520 26.182 - 26.298: 98.9855% ( 4) 00:09:59.520 26.298 - 26.415: 99.0316% ( 4) 00:09:59.520 26.415 - 26.531: 99.0431% ( 1) 00:09:59.520 26.531 - 26.647: 99.0662% ( 2) 00:09:59.520 26.647 - 26.764: 99.1008% ( 3) 00:09:59.520 26.880 - 26.996: 99.1238% ( 2) 00:09:59.520 26.996 - 27.113: 99.1469% ( 2) 00:09:59.520 27.113 - 27.229: 99.1584% ( 1) 00:09:59.520 27.229 - 27.345: 99.1815% ( 2) 00:09:59.520 27.462 - 27.578: 99.1930% ( 1) 00:09:59.520 27.578 - 27.695: 99.2045% ( 1) 00:09:59.520 28.160 - 28.276: 99.2160% ( 1) 00:09:59.520 28.625 - 28.742: 99.2276% ( 1) 00:09:59.520 28.858 - 28.975: 99.2391% ( 1) 00:09:59.520 29.091 - 29.207: 99.2506% ( 1) 00:09:59.520 29.789 - 30.022: 99.2622% ( 1) 00:09:59.520 30.022 - 30.255: 99.2737% ( 1) 00:09:59.520 30.255 - 30.487: 99.2967% ( 2) 00:09:59.520 30.487 - 30.720: 99.3083% ( 1) 00:09:59.520 30.720 - 30.953: 99.3544% ( 4) 00:09:59.520 30.953 - 31.185: 99.3659% ( 1) 00:09:59.520 31.185 - 31.418: 99.3890% ( 2) 00:09:59.520 31.418 - 31.651: 99.4120% ( 2) 00:09:59.520 31.651 - 31.884: 99.4351% ( 2) 00:09:59.520 31.884 - 32.116: 99.4466% ( 1) 00:09:59.520 32.116 - 32.349: 99.4582% ( 1) 00:09:59.520 32.582 - 32.815: 99.4812% ( 2) 00:09:59.520 32.815 - 33.047: 99.4927% ( 1) 00:09:59.520 33.047 - 33.280: 99.5158% ( 2) 00:09:59.520 33.280 - 33.513: 99.5273% ( 1) 00:09:59.520 33.978 - 34.211: 99.5389% ( 1) 00:09:59.520 34.211 - 34.444: 99.5504% ( 1) 00:09:59.520 34.444 - 34.676: 99.5619% ( 1) 00:09:59.520 34.676 - 34.909: 99.5734% ( 1) 00:09:59.520 34.909 - 35.142: 99.5850% ( 1) 00:09:59.520 35.142 - 35.375: 99.6080% ( 2) 00:09:59.520 35.607 - 35.840: 99.6196% ( 1) 00:09:59.520 35.840 - 36.073: 99.6311% ( 1) 00:09:59.520 38.400 - 38.633: 99.6426% ( 1) 00:09:59.520 38.865 - 39.098: 99.6541% ( 1) 00:09:59.520 39.098 - 39.331: 99.6657% ( 1) 00:09:59.520 39.331 - 39.564: 99.7003% ( 3) 00:09:59.520 39.564 - 39.796: 99.7233% ( 2) 00:09:59.520 39.796 - 40.029: 99.7579% ( 3) 00:09:59.520 40.029 - 40.262: 99.7925% ( 3) 00:09:59.520 40.262 - 40.495: 99.8732% ( 7) 00:09:59.520 40.495 - 40.727: 99.8847% ( 1) 00:09:59.520 40.727 - 
40.960: 99.9078% ( 2) 00:09:59.520 43.985 - 44.218: 99.9193% ( 1) 00:09:59.520 46.545 - 46.778: 99.9308% ( 1) 00:09:59.520 47.476 - 47.709: 99.9539% ( 2) 00:09:59.520 47.942 - 48.175: 99.9654% ( 1) 00:09:59.520 50.502 - 50.735: 99.9769% ( 1) 00:09:59.520 54.924 - 55.156: 99.9885% ( 1) 00:09:59.520 82.385 - 82.851: 100.0000% ( 1) 00:09:59.520 00:09:59.520 00:09:59.520 real 0m1.299s 00:09:59.520 user 0m1.088s 00:09:59.520 sys 0m0.152s 00:09:59.520 17:16:10 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.520 17:16:10 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:59.520 ************************************ 00:09:59.520 END TEST nvme_overhead 00:09:59.520 ************************************ 00:09:59.520 17:16:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:59.520 17:16:10 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:59.520 17:16:10 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:59.520 17:16:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.520 17:16:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.520 ************************************ 00:09:59.520 START TEST nvme_arbitration 00:09:59.520 ************************************ 00:09:59.520 17:16:10 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:02.803 Initializing NVMe Controllers 00:10:02.803 Attached to 0000:00:10.0 00:10:02.803 Attached to 0000:00:11.0 00:10:02.803 Attached to 0000:00:13.0 00:10:02.803 Attached to 0000:00:12.0 00:10:02.803 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:02.803 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:02.803 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:02.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:02.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:02.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:02.803 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:02.803 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:02.803 Initialization complete. Launching workers. 
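The arbitration example launched above associates each controller with an lcore and runs the mixed random read/write workload shown in its command line for 3 seconds across the four cores in mask 0xf. Purely as an illustration, the same run could be repeated on a smaller core mask; every flag below is copied from the invocation above except -c, which is changed to a two-core mask:

#!/usr/bin/env bash
# Illustrative only: repeat the arbitration run above with the same workload
# flags but a two-core mask (-c 0x3) instead of the 0xf used in this log.
set -euo pipefail
ARB=/home/vagrant/spdk_repo/spdk/build/examples/arbitration   # path from the log above
sudo "$ARB" -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0x3 -m 0 -a 0 -b 0 -n 100000 -i 0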
00:10:02.803 Starting thread on core 1 with urgent priority queue 00:10:02.803 Starting thread on core 2 with urgent priority queue 00:10:02.803 Starting thread on core 3 with urgent priority queue 00:10:02.803 Starting thread on core 0 with urgent priority queue 00:10:02.803 QEMU NVMe Ctrl (12340 ) core 0: 3178.67 IO/s 31.46 secs/100000 ios 00:10:02.803 QEMU NVMe Ctrl (12342 ) core 0: 3176.67 IO/s 31.48 secs/100000 ios 00:10:02.803 QEMU NVMe Ctrl (12341 ) core 1: 2983.67 IO/s 33.52 secs/100000 ios 00:10:02.803 QEMU NVMe Ctrl (12342 ) core 1: 2986.67 IO/s 33.48 secs/100000 ios 00:10:02.803 QEMU NVMe Ctrl (12343 ) core 2: 3254.67 IO/s 30.73 secs/100000 ios 00:10:02.803 QEMU NVMe Ctrl (12342 ) core 3: 3196.67 IO/s 31.28 secs/100000 ios 00:10:02.803 ======================================================== 00:10:02.803 00:10:02.803 00:10:02.803 real 0m3.370s 00:10:02.803 user 0m9.089s 00:10:02.803 sys 0m0.201s 00:10:02.803 17:16:13 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.803 17:16:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:02.803 ************************************ 00:10:02.803 END TEST nvme_arbitration 00:10:02.803 ************************************ 00:10:02.803 17:16:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:02.803 17:16:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:02.803 17:16:13 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:02.803 17:16:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.803 17:16:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.803 ************************************ 00:10:02.803 START TEST nvme_single_aen 00:10:02.803 ************************************ 00:10:02.803 17:16:13 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:03.060 Asynchronous Event Request test 00:10:03.060 Attached to 0000:00:10.0 00:10:03.060 Attached to 0000:00:11.0 00:10:03.060 Attached to 0000:00:13.0 00:10:03.060 Attached to 0000:00:12.0 00:10:03.060 Reset controller to setup AER completions for this process 00:10:03.060 Registering asynchronous event callbacks... 
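The two columns in the arbitration summary above are consistent with each other: "secs/100000 ios" is simply 100000 divided by the reported IO/s rate, e.g. 100000 / 3178.67 ≈ 31.46 s for lcore 0 on the 12340 controller. A one-line check of that relationship, illustrative only:

# Illustrative only: confirm that secs/100000 ios == 100000 / IO/s for the
# lcore-0 figure reported above (3178.67 IO/s -> 31.46 s).
awk 'BEGIN { printf "100000 / 3178.67 IO/s = %.2f s per 100000 ios\n", 100000 / 3178.67 }'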
00:10:03.060 Getting orig temperature thresholds of all controllers 00:10:03.060 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:03.060 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:03.060 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:03.060 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:03.060 Setting all controllers temperature threshold low to trigger AER 00:10:03.060 Waiting for all controllers temperature threshold to be set lower 00:10:03.060 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:03.060 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:03.060 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:03.060 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:03.060 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:03.060 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:03.060 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:03.060 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:03.060 Waiting for all controllers to trigger AER and reset threshold 00:10:03.060 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:03.060 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:03.060 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:03.060 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:03.060 Cleaning up... 00:10:03.060 00:10:03.060 real 0m0.274s 00:10:03.060 user 0m0.101s 00:10:03.060 sys 0m0.134s 00:10:03.060 17:16:13 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.060 ************************************ 00:10:03.060 END TEST nvme_single_aen 00:10:03.060 ************************************ 00:10:03.060 17:16:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:03.060 17:16:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:03.060 17:16:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:03.060 17:16:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:03.060 17:16:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.060 17:16:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.060 ************************************ 00:10:03.060 START TEST nvme_doorbell_aers 00:10:03.060 ************************************ 00:10:03.060 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:10:03.060 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:03.060 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:03.060 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:03.061 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:03.318 17:16:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:03.318 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:03.318 17:16:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:03.576 [2024-07-15 17:16:14.190633] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:13.637 Executing: test_write_invalid_db 00:10:13.637 Waiting for AER completion... 00:10:13.637 Failure: test_write_invalid_db 00:10:13.637 00:10:13.637 Executing: test_invalid_db_write_overflow_sq 00:10:13.637 Waiting for AER completion... 00:10:13.637 Failure: test_invalid_db_write_overflow_sq 00:10:13.637 00:10:13.637 Executing: test_invalid_db_write_overflow_cq 00:10:13.637 Waiting for AER completion... 00:10:13.637 Failure: test_invalid_db_write_overflow_cq 00:10:13.637 00:10:13.637 17:16:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:13.638 17:16:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:13.638 [2024-07-15 17:16:24.245663] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:23.628 Executing: test_write_invalid_db 00:10:23.628 Waiting for AER completion... 00:10:23.628 Failure: test_write_invalid_db 00:10:23.628 00:10:23.628 Executing: test_invalid_db_write_overflow_sq 00:10:23.628 Waiting for AER completion... 00:10:23.628 Failure: test_invalid_db_write_overflow_sq 00:10:23.628 00:10:23.628 Executing: test_invalid_db_write_overflow_cq 00:10:23.628 Waiting for AER completion... 00:10:23.628 Failure: test_invalid_db_write_overflow_cq 00:10:23.628 00:10:23.628 17:16:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:23.628 17:16:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:23.628 [2024-07-15 17:16:34.299704] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:33.590 Executing: test_write_invalid_db 00:10:33.590 Waiting for AER completion... 00:10:33.590 Failure: test_write_invalid_db 00:10:33.590 00:10:33.590 Executing: test_invalid_db_write_overflow_sq 00:10:33.590 Waiting for AER completion... 00:10:33.590 Failure: test_invalid_db_write_overflow_sq 00:10:33.590 00:10:33.590 Executing: test_invalid_db_write_overflow_cq 00:10:33.590 Waiting for AER completion... 
00:10:33.590 Failure: test_invalid_db_write_overflow_cq 00:10:33.590 00:10:33.590 17:16:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:33.590 17:16:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:33.590 [2024-07-15 17:16:44.343230] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.550 Executing: test_write_invalid_db 00:10:43.550 Waiting for AER completion... 00:10:43.550 Failure: test_write_invalid_db 00:10:43.550 00:10:43.550 Executing: test_invalid_db_write_overflow_sq 00:10:43.550 Waiting for AER completion... 00:10:43.550 Failure: test_invalid_db_write_overflow_sq 00:10:43.550 00:10:43.550 Executing: test_invalid_db_write_overflow_cq 00:10:43.550 Waiting for AER completion... 00:10:43.550 Failure: test_invalid_db_write_overflow_cq 00:10:43.550 00:10:43.550 00:10:43.550 real 0m40.259s 00:10:43.550 user 0m34.064s 00:10:43.550 sys 0m5.803s 00:10:43.550 17:16:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.550 ************************************ 00:10:43.550 END TEST nvme_doorbell_aers 00:10:43.550 ************************************ 00:10:43.550 17:16:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:43.550 17:16:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:43.550 17:16:54 nvme -- nvme/nvme.sh@97 -- # uname 00:10:43.550 17:16:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:43.550 17:16:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:43.550 17:16:54 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:43.550 17:16:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.550 17:16:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.550 ************************************ 00:10:43.550 START TEST nvme_multi_aen 00:10:43.550 ************************************ 00:10:43.550 17:16:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:43.808 [2024-07-15 17:16:54.423732] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.423905] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.423971] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.426453] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.426528] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.426569] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 
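The nvme_doorbell_aers pass that finishes above walks every controller address found earlier and gives each doorbell_aers run a fixed 10 second budget. A condensed sketch of the traced loop follows; the bdf list and the timeout invocation are copied from the xtrace lines above, while the loop framing itself is an assumption about the helper in nvme/nvme.sh.

  # Sketch of the traced per-controller loop (addresses and paths copied from the log).
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      # --preserve-status makes timeout report the doorbell_aers exit status
      # instead of its own 124 when the 10 s limit is reached.
      timeout --preserve-status 10 \
          /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
          -r "trtype:PCIe traddr:${bdf}"
  done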
00:10:43.808 [2024-07-15 17:16:54.428587] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.428664] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.428708] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.430766] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.430828] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 [2024-07-15 17:16:54.430872] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82455) is not found. Dropping the request. 00:10:43.808 Child process pid: 82966 00:10:44.066 [Child] Asynchronous Event Request test 00:10:44.066 [Child] Attached to 0000:00:10.0 00:10:44.066 [Child] Attached to 0000:00:11.0 00:10:44.066 [Child] Attached to 0000:00:13.0 00:10:44.066 [Child] Attached to 0000:00:12.0 00:10:44.066 [Child] Registering asynchronous event callbacks... 00:10:44.066 [Child] Getting orig temperature thresholds of all controllers 00:10:44.066 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:44.066 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 [Child] Cleaning up... 00:10:44.066 Asynchronous Event Request test 00:10:44.066 Attached to 0000:00:10.0 00:10:44.066 Attached to 0000:00:11.0 00:10:44.066 Attached to 0000:00:13.0 00:10:44.066 Attached to 0000:00:12.0 00:10:44.066 Reset controller to setup AER completions for this process 00:10:44.066 Registering asynchronous event callbacks... 
00:10:44.066 Getting orig temperature thresholds of all controllers 00:10:44.066 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:44.066 Setting all controllers temperature threshold low to trigger AER 00:10:44.066 Waiting for all controllers temperature threshold to be set lower 00:10:44.066 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:44.066 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:44.066 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:44.066 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:44.066 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:44.066 Waiting for all controllers to trigger AER and reset threshold 00:10:44.066 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.066 Cleaning up... 00:10:44.066 00:10:44.066 real 0m0.602s 00:10:44.066 user 0m0.218s 00:10:44.066 sys 0m0.273s 00:10:44.066 17:16:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.066 17:16:54 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:44.066 ************************************ 00:10:44.066 END TEST nvme_multi_aen 00:10:44.066 ************************************ 00:10:44.066 17:16:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:44.066 17:16:54 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:44.066 17:16:54 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:44.066 17:16:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.066 17:16:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:44.066 ************************************ 00:10:44.066 START TEST nvme_startup 00:10:44.066 ************************************ 00:10:44.066 17:16:54 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:44.324 Initializing NVMe Controllers 00:10:44.324 Attached to 0000:00:10.0 00:10:44.324 Attached to 0000:00:11.0 00:10:44.324 Attached to 0000:00:13.0 00:10:44.324 Attached to 0000:00:12.0 00:10:44.324 Initialization complete. 00:10:44.324 Time used:209306.922 (us). 
00:10:44.324 00:10:44.324 real 0m0.305s 00:10:44.324 user 0m0.100s 00:10:44.324 sys 0m0.154s 00:10:44.324 17:16:55 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.324 17:16:55 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:44.324 ************************************ 00:10:44.324 END TEST nvme_startup 00:10:44.324 ************************************ 00:10:44.324 17:16:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:44.324 17:16:55 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:44.324 17:16:55 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:44.324 17:16:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.324 17:16:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:44.324 ************************************ 00:10:44.324 START TEST nvme_multi_secondary 00:10:44.324 ************************************ 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=83022 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=83023 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:44.324 17:16:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:47.655 Initializing NVMe Controllers 00:10:47.655 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.655 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.655 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.655 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.655 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:47.655 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:47.655 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:47.655 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:47.655 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:47.655 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:47.655 Initialization complete. Launching workers. 
00:10:47.655 ======================================================== 00:10:47.655 Latency(us) 00:10:47.655 Device Information : IOPS MiB/s Average min max 00:10:47.655 PCIE (0000:00:10.0) NSID 1 from core 2: 2177.62 8.51 7343.58 1391.82 16394.20 00:10:47.655 PCIE (0000:00:11.0) NSID 1 from core 2: 2177.62 8.51 7348.15 1448.77 16170.97 00:10:47.655 PCIE (0000:00:13.0) NSID 1 from core 2: 2177.62 8.51 7358.11 1464.16 16768.55 00:10:47.655 PCIE (0000:00:12.0) NSID 1 from core 2: 2177.62 8.51 7358.57 1358.91 14957.06 00:10:47.655 PCIE (0000:00:12.0) NSID 2 from core 2: 2177.62 8.51 7359.25 1379.62 15288.28 00:10:47.655 PCIE (0000:00:12.0) NSID 3 from core 2: 2177.62 8.51 7359.27 1421.40 15800.58 00:10:47.655 ======================================================== 00:10:47.655 Total : 13065.73 51.04 7354.49 1358.91 16768.55 00:10:47.655 00:10:47.932 Initializing NVMe Controllers 00:10:47.932 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.932 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.932 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.932 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.932 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:47.932 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:47.932 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:47.932 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:47.932 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:47.932 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:47.932 Initialization complete. Launching workers. 00:10:47.932 ======================================================== 00:10:47.932 Latency(us) 00:10:47.932 Device Information : IOPS MiB/s Average min max 00:10:47.932 PCIE (0000:00:10.0) NSID 1 from core 1: 4688.48 18.31 3410.37 1039.45 7934.05 00:10:47.932 PCIE (0000:00:11.0) NSID 1 from core 1: 4688.48 18.31 3411.99 1076.05 7950.69 00:10:47.932 PCIE (0000:00:13.0) NSID 1 from core 1: 4688.48 18.31 3411.97 1069.65 8379.36 00:10:47.932 PCIE (0000:00:12.0) NSID 1 from core 1: 4688.48 18.31 3411.88 1086.37 8704.24 00:10:47.932 PCIE (0000:00:12.0) NSID 2 from core 1: 4688.48 18.31 3411.71 1104.22 9100.22 00:10:47.932 PCIE (0000:00:12.0) NSID 3 from core 1: 4688.48 18.31 3411.61 1073.52 8312.33 00:10:47.932 ======================================================== 00:10:47.932 Total : 28130.88 109.89 3411.59 1039.45 9100.22 00:10:47.932 00:10:47.932 17:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 83022 00:10:50.459 Initializing NVMe Controllers 00:10:50.459 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:50.459 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:50.459 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:50.459 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:50.459 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:50.459 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:50.459 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:50.459 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:50.459 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:50.459 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:50.459 Initialization complete. Launching workers. 
00:10:50.459 ======================================================== 00:10:50.459 Latency(us) 00:10:50.459 Device Information : IOPS MiB/s Average min max 00:10:50.459 PCIE (0000:00:10.0) NSID 1 from core 0: 8098.79 31.64 1973.81 975.77 6139.40 00:10:50.459 PCIE (0000:00:11.0) NSID 1 from core 0: 8098.79 31.64 1974.97 997.87 5918.62 00:10:50.459 PCIE (0000:00:13.0) NSID 1 from core 0: 8098.79 31.64 1974.84 891.83 6070.25 00:10:50.459 PCIE (0000:00:12.0) NSID 1 from core 0: 8098.59 31.64 1974.76 820.05 6379.88 00:10:50.459 PCIE (0000:00:12.0) NSID 2 from core 0: 8098.79 31.64 1974.56 736.98 6788.93 00:10:50.459 PCIE (0000:00:12.0) NSID 3 from core 0: 8098.79 31.64 1974.43 657.40 7483.26 00:10:50.459 ======================================================== 00:10:50.459 Total : 48592.52 189.81 1974.56 657.40 7483.26 00:10:50.459 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 83023 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=83099 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=83100 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:50.459 17:17:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:53.742 Initializing NVMe Controllers 00:10:53.742 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:53.742 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:53.742 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:53.742 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:53.742 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:53.742 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:53.742 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:53.742 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:53.742 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:53.742 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:53.742 Initialization complete. Launching workers. 
00:10:53.743 ======================================================== 00:10:53.743 Latency(us) 00:10:53.743 Device Information : IOPS MiB/s Average min max 00:10:53.743 PCIE (0000:00:10.0) NSID 1 from core 0: 4895.68 19.12 3266.05 1122.03 8197.90 00:10:53.743 PCIE (0000:00:11.0) NSID 1 from core 0: 4895.68 19.12 3267.72 1165.87 8101.90 00:10:53.743 PCIE (0000:00:13.0) NSID 1 from core 0: 4895.68 19.12 3267.73 1158.26 7290.02 00:10:53.743 PCIE (0000:00:12.0) NSID 1 from core 0: 4895.68 19.12 3267.76 1135.49 7590.45 00:10:53.743 PCIE (0000:00:12.0) NSID 2 from core 0: 4895.68 19.12 3267.88 1152.31 7683.95 00:10:53.743 PCIE (0000:00:12.0) NSID 3 from core 0: 4895.68 19.12 3268.03 1131.12 7339.43 00:10:53.743 ======================================================== 00:10:53.743 Total : 29374.07 114.74 3267.53 1122.03 8197.90 00:10:53.743 00:10:53.743 Initializing NVMe Controllers 00:10:53.743 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:53.743 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:53.743 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:53.743 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:53.743 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:53.743 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:53.743 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:53.743 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:53.743 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:53.743 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:53.743 Initialization complete. Launching workers. 00:10:53.743 ======================================================== 00:10:53.743 Latency(us) 00:10:53.743 Device Information : IOPS MiB/s Average min max 00:10:53.743 PCIE (0000:00:10.0) NSID 1 from core 1: 4862.24 18.99 3288.42 1065.60 8007.63 00:10:53.743 PCIE (0000:00:11.0) NSID 1 from core 1: 4862.24 18.99 3289.89 1096.54 8221.49 00:10:53.743 PCIE (0000:00:13.0) NSID 1 from core 1: 4862.24 18.99 3289.66 1102.20 8438.26 00:10:53.743 PCIE (0000:00:12.0) NSID 1 from core 1: 4862.24 18.99 3289.45 1100.00 8718.27 00:10:53.743 PCIE (0000:00:12.0) NSID 2 from core 1: 4862.24 18.99 3289.25 905.50 8178.36 00:10:53.743 PCIE (0000:00:12.0) NSID 3 from core 1: 4862.24 18.99 3289.06 499.26 7726.12 00:10:53.743 ======================================================== 00:10:53.743 Total : 29173.43 113.96 3289.29 499.26 8718.27 00:10:53.743 00:10:55.644 Initializing NVMe Controllers 00:10:55.644 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:55.644 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:55.644 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:55.644 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:55.644 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:55.644 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:55.644 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:55.644 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:55.644 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:55.644 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:55.644 Initialization complete. Launching workers. 
00:10:55.644 ======================================================== 00:10:55.644 Latency(us) 00:10:55.644 Device Information : IOPS MiB/s Average min max 00:10:55.644 PCIE (0000:00:10.0) NSID 1 from core 2: 3273.01 12.79 4886.18 1032.60 15577.23 00:10:55.644 PCIE (0000:00:11.0) NSID 1 from core 2: 3273.01 12.79 4887.69 1036.65 14809.61 00:10:55.644 PCIE (0000:00:13.0) NSID 1 from core 2: 3273.01 12.79 4887.29 1068.12 16711.43 00:10:55.644 PCIE (0000:00:12.0) NSID 1 from core 2: 3273.01 12.79 4887.37 1065.33 15047.29 00:10:55.644 PCIE (0000:00:12.0) NSID 2 from core 2: 3273.01 12.79 4890.87 1073.61 15558.38 00:10:55.644 PCIE (0000:00:12.0) NSID 3 from core 2: 3273.01 12.79 4891.14 1056.09 18713.77 00:10:55.644 ======================================================== 00:10:55.644 Total : 19638.07 76.71 4888.43 1032.60 18713.77 00:10:55.644 00:10:55.644 17:17:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 83099 00:10:55.644 17:17:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 83100 00:10:55.644 00:10:55.644 real 0m11.230s 00:10:55.644 user 0m18.462s 00:10:55.644 sys 0m1.007s 00:10:55.644 17:17:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.644 ************************************ 00:10:55.644 17:17:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:55.644 END TEST nvme_multi_secondary 00:10:55.644 ************************************ 00:10:55.644 17:17:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:55.644 17:17:06 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:55.644 17:17:06 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:55.644 17:17:06 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/82036 ]] 00:10:55.644 17:17:06 nvme -- common/autotest_common.sh@1088 -- # kill 82036 00:10:55.644 17:17:06 nvme -- common/autotest_common.sh@1089 -- # wait 82036 00:10:55.644 [2024-07-15 17:17:06.453988] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.454088] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.454152] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.454183] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.455203] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.455286] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.455326] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.455373] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 
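The nvme_multi_secondary result above comes from several spdk_nvme_perf instances sharing the -i 0 instance ID and running on disjoint core masks, which appears to be what lets them drive the same controllers side by side in this multi-process setup. The fragment below restates one round of the traced commands; the split between background and foreground jobs is inferred from the pid0/pid1 assignments and the two wait calls, not spelled out verbatim in the log.

  # Sketch of one round of the traced nvme_multi_secondary flow (flags copied from the log).
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # lcore 0, 5 s read load
  pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # lcore 1, 3 s read load
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # lcore 2, 3 s, foreground
  wait "$pid0"
  wait "$pid1"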
00:10:55.644 [2024-07-15 17:17:06.456244] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.456341] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.456646] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.456687] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.457716] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.457795] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.457838] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.644 [2024-07-15 17:17:06.457870] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 82965) is not found. Dropping the request. 00:10:55.902 17:17:06 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:10:55.902 17:17:06 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:10:55.902 17:17:06 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:55.902 17:17:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.902 17:17:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.902 17:17:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 ************************************ 00:10:55.902 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:55.902 ************************************ 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:55.902 * Looking for test storage... 
00:10:55.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:55.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
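get_first_nvme_bdf above settles on 0000:00:10.0 by regenerating the NVMe controller config and pulling every traddr out of it with jq. The helper bodies below are reconstructed from the traced pipeline rather than copied from autotest_common.sh, so treat them as a sketch.

  # Sketch of the traced bdf enumeration (gen_nvme.sh | jq pipeline as shown above).
  rootdir=/home/vagrant/spdk_repo/spdk

  get_nvme_bdfs() {
      local bdfs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} == 0 )) && return 1          # the trace checks (( 4 == 0 ))
      printf '%s\n' "${bdfs[@]}"                  # 0000:00:10.0 0000:00:11.0 ...
  }

  get_first_nvme_bdf() {
      local bdfs=($(get_nvme_bdfs))
      echo "${bdfs[0]}"                           # 0000:00:10.0 in this run
  }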
00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=83259 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 83259 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 83259 ']' 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.902 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.903 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.903 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.903 17:17:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:56.160 [2024-07-15 17:17:06.864293] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:10:56.160 [2024-07-15 17:17:06.864827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83259 ] 00:10:56.418 [2024-07-15 17:17:07.041209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:56.418 [2024-07-15 17:17:07.063880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.418 [2024-07-15 17:17:07.169811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.418 [2024-07-15 17:17:07.169949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.418 [2024-07-15 17:17:07.170047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.418 [2024-07-15 17:17:07.170093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.983 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.983 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:10:56.983 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:56.983 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.983 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:57.242 nvme0n1 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_10YAm.txt 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:57.242 true 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721063827 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=83282 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:57.242 17:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:59.180 [2024-07-15 17:17:09.918965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:59.180 [2024-07-15 17:17:09.919356] nvme_qpair.c: 
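The setup traced above starts spdk_tgt, attaches the first controller as bdev nvme0 over the default RPC socket, arms a one-shot error injection for admin opcode 10 (which lines up with the Get Features command seen later in the trace), and then issues the Get Features that the injection will hold. The plain-shell restatement below condenses those RPCs; the backgrounding of bdev_nvme_send_cmd is inferred from the get_feat_pid recorded in the trace and the wait that follows later, and the command payload is left as a placeholder for the base64 blob shown in the log.

  # Sketch of the traced target setup and error-injection arming (RPC calls copied from the log).
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0xF &
  spdk_target_pid=$!
  # waitforlisten (autotest_common.sh) blocks here until /var/tmp/spdk.sock answers RPCs.

  "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

  # Hold the next admin command with opcode 10 for up to 15 s, then fail it with sct=0 sc=1.
  "$SPDK/scripts/rpc.py" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin \
      --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

  # Fire the Get Features (Number of Queues) that the injection traps; keep it in the
  # background so the controller reset issued next can unstick it.
  GET_FEATURES_CMD_B64='...'   # the base64-encoded command blob from the trace above
  "$SPDK/scripts/rpc.py" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_CMD_B64" &
  get_feat_pid=$!
  sleep 2
  "$SPDK/scripts/rpc.py" bdev_nvme_reset_controller nvme0
  wait "$get_feat_pid"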
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:59.180 [2024-07-15 17:17:09.919421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:59.180 [2024-07-15 17:17:09.919445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.180 [2024-07-15 17:17:09.921787] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:59.180 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 83282 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 83282 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 83282 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:59.180 17:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_10YAm.txt 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:59.180 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- 
# local bin_array status 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:59.181 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_10YAm.txt 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 83259 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 83259 ']' 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 83259 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83259 00:10:59.440 killing process with pid 83259 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83259' 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 83259 00:10:59.440 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 83259 00:10:59.698 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:59.698 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:59.698 ************************************ 00:10:59.698 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:59.698 ************************************ 00:10:59.698 00:10:59.698 real 0m3.948s 00:10:59.698 user 0m13.752s 00:10:59.698 sys 0m0.690s 00:10:59.698 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.698 17:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:59.957 17:17:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:59.957 17:17:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:59.957 17:17:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:59.957 17:17:10 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:59.957 17:17:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.957 
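After the reset completes, the test reads back the completion that the stuck Get Features finally received and decodes its status fields from the base64-encoded cpl value. The traced helper is base64_decode_bits; its exact byte indexing is not visible in the log, so the decode below is a simplified reconstruction that reproduces the traced results (sc=0x1, Invalid Opcode, and sct=0x0).

  # Sketch of the status decoding traced above; the byte offsets are an assumption
  # consistent with the NVMe completion layout (status lives in the top half of DW3).
  cpl=$(jq -r .cpl /tmp/err_inj_10YAm.txt)          # base64 completion saved by bdev_nvme_send_cmd

  base64_decode_bits() {
      local b64=$1 off=$2 mask=$3
      local bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
      local status=$(( (bytes[15] << 8) | bytes[14] ))   # completion status halfword
      printf '0x%x\n' $(( (status >> off) & mask ))
  }

  base64_decode_bits "$cpl" 1 255    # -> 0x1 : status code (SC)
  base64_decode_bits "$cpl" 9 3      # -> 0x0 : status code type (SCT)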
17:17:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.957 ************************************ 00:10:59.957 START TEST nvme_fio 00:10:59.957 ************************************ 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:59.957 17:17:10 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:59.957 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:00.216 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:00.216 17:17:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:00.474 17:17:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:00.474 17:17:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 
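nvme_fio drives stock fio through the SPDK NVMe fio plugin rather than a kernel block device. The wrapper traced around this point (the ldd, grep libasan, and awk steps continue just below) works out which ASan runtime the plugin was built against and pre-loads it together with the plugin before calling fio. The function below is a condensed sketch of that logic; fio_nvme is reconstructed from the xtrace, not copied from the harness.

  # Sketch of the traced fio wrapper (paths and arguments as they appear in this log).
  SPDK=/home/vagrant/spdk_repo/spdk
  plugin=$SPDK/build/fio/spdk_nvme

  fio_nvme() {
      local asan_lib
      # When the plugin links against ASan, fio must load the sanitizer runtime first.
      asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
      LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
  }

  # Per-controller call traced above: the example job file plus a filename that encodes
  # the PCIe transport and controller address, run with 4 KiB blocks.
  fio_nvme "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096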
00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:00.474 17:17:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:00.733 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:00.733 fio-3.35 00:11:00.733 Starting 1 thread 00:11:04.017 00:11:04.017 test: (groupid=0, jobs=1): err= 0: pid=83407: Mon Jul 15 17:17:14 2024 00:11:04.017 read: IOPS=14.9k, BW=58.2MiB/s (61.1MB/s)(117MiB/2001msec) 00:11:04.017 slat (usec): min=4, max=105, avg= 7.30, stdev= 2.43 00:11:04.017 clat (usec): min=701, max=9782, avg=4276.86, stdev=686.39 00:11:04.017 lat (usec): min=722, max=9787, avg=4284.16, stdev=687.25 00:11:04.017 clat percentiles (usec): 00:11:04.017 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3687], 00:11:04.017 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:04.017 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5080], 00:11:04.017 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8717], 00:11:04.017 | 99.99th=[ 9634] 00:11:04.017 bw ( KiB/s): min=55672, max=58840, per=96.29%, avg=57413.33, stdev=1607.27, samples=3 00:11:04.017 iops : min=13918, max=14710, avg=14353.33, stdev=401.82, samples=3 00:11:04.017 write: IOPS=14.9k, BW=58.2MiB/s (61.1MB/s)(117MiB/2001msec); 0 zone resets 00:11:04.017 slat (usec): min=4, max=116, avg= 7.44, stdev= 2.61 00:11:04.017 clat (usec): min=833, max=9863, avg=4282.02, stdev=683.42 00:11:04.017 lat (usec): min=847, max=9869, avg=4289.45, stdev=684.28 00:11:04.017 clat percentiles (usec): 00:11:04.017 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3687], 00:11:04.017 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:04.017 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5080], 00:11:04.017 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 8586], 00:11:04.017 | 99.99th=[ 9503] 00:11:04.017 bw ( KiB/s): min=55952, max=58360, per=96.07%, avg=57301.33, stdev=1230.03, samples=3 00:11:04.017 iops : min=13988, max=14590, avg=14325.33, stdev=307.51, samples=3 00:11:04.017 lat (usec) : 750=0.01%, 1000=0.01% 00:11:04.017 lat (msec) : 2=0.03%, 4=29.63%, 10=70.34% 00:11:04.017 cpu : usr=98.90%, sys=0.05%, ctx=5, majf=0, minf=625 00:11:04.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:04.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.017 issued rwts: total=29826,29837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.017 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.017 00:11:04.017 Run status group 0 (all jobs): 00:11:04.017 READ: bw=58.2MiB/s (61.1MB/s), 58.2MiB/s-58.2MiB/s (61.1MB/s-61.1MB/s), io=117MiB (122MB), run=2001-2001msec 00:11:04.017 WRITE: bw=58.2MiB/s 
(61.1MB/s), 58.2MiB/s-58.2MiB/s (61.1MB/s-61.1MB/s), io=117MiB (122MB), run=2001-2001msec 00:11:04.017 ----------------------------------------------------- 00:11:04.017 Suppressions used: 00:11:04.017 count bytes template 00:11:04.017 1 32 /usr/src/fio/parse.c 00:11:04.017 1 8 libtcmalloc_minimal.so 00:11:04.017 ----------------------------------------------------- 00:11:04.017 00:11:04.017 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:04.017 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:04.017 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:04.017 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:04.275 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:04.275 17:17:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:04.532 17:17:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:04.532 17:17:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:04.532 17:17:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:04.789 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:04.789 fio-3.35 00:11:04.789 Starting 1 thread 00:11:08.063 00:11:08.063 test: (groupid=0, jobs=1): err= 0: pid=83473: Mon Jul 15 17:17:18 2024 00:11:08.063 read: IOPS=16.0k, BW=62.5MiB/s 
(65.5MB/s)(125MiB/2001msec) 00:11:08.063 slat (usec): min=5, max=560, avg= 6.96, stdev= 3.88 00:11:08.063 clat (usec): min=325, max=10937, avg=3982.40, stdev=552.01 00:11:08.063 lat (usec): min=331, max=10995, avg=3989.36, stdev=552.70 00:11:08.063 clat percentiles (usec): 00:11:08.063 | 1.00th=[ 3261], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3654], 00:11:08.063 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:08.063 | 70.00th=[ 4015], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4883], 00:11:08.063 | 99.00th=[ 5997], 99.50th=[ 6718], 99.90th=[ 7832], 99.95th=[ 9765], 00:11:08.063 | 99.99th=[10814] 00:11:08.063 bw ( KiB/s): min=58048, max=67432, per=98.02%, avg=62698.67, stdev=4692.55, samples=3 00:11:08.063 iops : min=14512, max=16858, avg=15674.67, stdev=1173.14, samples=3 00:11:08.063 write: IOPS=16.0k, BW=62.6MiB/s (65.6MB/s)(125MiB/2001msec); 0 zone resets 00:11:08.063 slat (usec): min=5, max=732, avg= 7.17, stdev= 4.71 00:11:08.063 clat (usec): min=277, max=10816, avg=3987.10, stdev=549.88 00:11:08.063 lat (usec): min=284, max=10835, avg=3994.27, stdev=550.51 00:11:08.063 clat percentiles (usec): 00:11:08.063 | 1.00th=[ 3261], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:11:08.063 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:11:08.063 | 70.00th=[ 4015], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 4883], 00:11:08.063 | 99.00th=[ 5997], 99.50th=[ 6783], 99.90th=[ 8291], 99.95th=[ 9765], 00:11:08.063 | 99.99th=[10683] 00:11:08.063 bw ( KiB/s): min=57424, max=66416, per=97.16%, avg=62282.67, stdev=4539.67, samples=3 00:11:08.063 iops : min=14356, max=16604, avg=15570.67, stdev=1134.92, samples=3 00:11:08.063 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:11:08.063 lat (msec) : 2=0.06%, 4=68.45%, 10=31.41%, 20=0.03% 00:11:08.063 cpu : usr=98.30%, sys=0.45%, ctx=21, majf=0, minf=625 00:11:08.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:08.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.063 issued rwts: total=31998,32066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.063 00:11:08.063 Run status group 0 (all jobs): 00:11:08.063 READ: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:08.063 WRITE: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:11:08.063 ----------------------------------------------------- 00:11:08.063 Suppressions used: 00:11:08.063 count bytes template 00:11:08.063 1 32 /usr/src/fio/parse.c 00:11:08.063 1 8 libtcmalloc_minimal.so 00:11:08.063 ----------------------------------------------------- 00:11:08.063 00:11:08.063 17:17:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:08.063 17:17:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:08.063 17:17:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:08.063 17:17:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:08.321 17:17:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:08.321 17:17:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:08.578 17:17:19 
nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:08.578 17:17:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:08.578 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:08.835 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:08.835 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:08.835 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:08.835 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:08.835 17:17:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:08.835 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:08.835 fio-3.35 00:11:08.835 Starting 1 thread 00:11:12.127 00:11:12.127 test: (groupid=0, jobs=1): err= 0: pid=83534: Mon Jul 15 17:17:22 2024 00:11:12.127 read: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec) 00:11:12.127 slat (usec): min=5, max=778, avg= 7.25, stdev= 5.01 00:11:12.127 clat (usec): min=360, max=10864, avg=4171.91, stdev=672.19 00:11:12.127 lat (usec): min=368, max=10982, avg=4179.16, stdev=673.13 00:11:12.127 clat percentiles (usec): 00:11:12.128 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:11:12.128 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4015], 00:11:12.128 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5669], 00:11:12.128 | 99.00th=[ 6456], 99.50th=[ 7308], 99.90th=[ 9372], 99.95th=[ 9503], 00:11:12.128 | 99.99th=[10552] 00:11:12.128 bw ( KiB/s): min=57224, max=65301, per=100.00%, avg=61695.00, stdev=4107.39, samples=3 00:11:12.128 iops : min=14306, max=16325, avg=15423.67, stdev=1026.74, samples=3 00:11:12.128 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec); 0 zone resets 00:11:12.128 slat (usec): min=5, max=186, avg= 7.48, stdev= 2.50 00:11:12.128 clat (usec): min=262, max=10661, avg=4184.91, stdev=679.30 00:11:12.128 
lat (usec): min=271, max=10683, avg=4192.39, stdev=680.19 00:11:12.128 clat percentiles (usec): 00:11:12.128 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:11:12.128 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4015], 00:11:12.128 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5735], 00:11:12.128 | 99.00th=[ 6456], 99.50th=[ 7439], 99.90th=[ 9372], 99.95th=[ 9503], 00:11:12.128 | 99.99th=[10421] 00:11:12.128 bw ( KiB/s): min=56360, max=65780, per=100.00%, avg=61289.33, stdev=4725.30, samples=3 00:11:12.128 iops : min=14090, max=16445, avg=15322.33, stdev=1181.32, samples=3 00:11:12.128 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:12.128 lat (msec) : 2=0.05%, 4=58.58%, 10=41.31%, 20=0.02% 00:11:12.128 cpu : usr=98.35%, sys=0.30%, ctx=26, majf=0, minf=624 00:11:12.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:12.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.128 issued rwts: total=30523,30569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.128 00:11:12.128 Run status group 0 (all jobs): 00:11:12.128 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:11:12.128 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:11:12.386 ----------------------------------------------------- 00:11:12.386 Suppressions used: 00:11:12.386 count bytes template 00:11:12.386 1 32 /usr/src/fio/parse.c 00:11:12.386 1 8 libtcmalloc_minimal.so 00:11:12.386 ----------------------------------------------------- 00:11:12.386 00:11:12.386 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.386 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:12.386 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:12.386 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:12.644 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:12.644 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:12.902 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:12.902 17:17:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:12.902 17:17:23 nvme.nvme_fio -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:12.902 17:17:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:13.160 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:13.160 fio-3.35 00:11:13.160 Starting 1 thread 00:11:17.346 00:11:17.346 test: (groupid=0, jobs=1): err= 0: pid=83602: Mon Jul 15 17:17:27 2024 00:11:17.346 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec) 00:11:17.346 slat (nsec): min=4740, max=54865, avg=7252.66, stdev=2195.12 00:11:17.346 clat (usec): min=302, max=11046, avg=4337.00, stdev=631.75 00:11:17.346 lat (usec): min=309, max=11101, avg=4344.25, stdev=632.46 00:11:17.346 clat percentiles (usec): 00:11:17.346 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3916], 00:11:17.346 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:17.346 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5014], 00:11:17.346 | 99.00th=[ 7242], 99.50th=[ 7701], 99.90th=[ 8455], 99.95th=[ 9634], 00:11:17.346 | 99.99th=[10945] 00:11:17.346 bw ( KiB/s): min=54760, max=62464, per=98.60%, avg=57938.67, stdev=4024.68, samples=3 00:11:17.346 iops : min=13690, max=15616, avg=14484.67, stdev=1006.17, samples=3 00:11:17.346 write: IOPS=14.7k, BW=57.5MiB/s (60.2MB/s)(115MiB/2001msec); 0 zone resets 00:11:17.346 slat (usec): min=4, max=493, avg= 7.43, stdev= 3.60 00:11:17.346 clat (usec): min=357, max=10856, avg=4342.78, stdev=641.56 00:11:17.346 lat (usec): min=365, max=10874, avg=4350.21, stdev=642.29 00:11:17.346 clat percentiles (usec): 00:11:17.346 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3884], 00:11:17.346 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:11:17.346 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5014], 00:11:17.346 | 99.00th=[ 7373], 99.50th=[ 7767], 99.90th=[ 8455], 99.95th=[ 9765], 00:11:17.346 | 99.99th=[10683] 00:11:17.346 bw ( KiB/s): min=55088, max=62008, per=98.28%, avg=57821.33, stdev=3681.81, samples=3 00:11:17.346 iops : min=13772, max=15502, avg=14455.33, stdev=920.45, samples=3 00:11:17.346 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:11:17.346 lat (msec) : 2=0.05%, 4=22.63%, 10=77.25%, 20=0.04% 00:11:17.346 cpu : usr=98.95%, sys=0.00%, ctx=4, majf=0, minf=623 00:11:17.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.346 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.346 issued rwts: total=29396,29432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.346 00:11:17.346 Run status group 0 (all jobs): 00:11:17.346 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (120MB), run=2001-2001msec 00:11:17.346 WRITE: bw=57.5MiB/s (60.2MB/s), 57.5MiB/s-57.5MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2001-2001msec 00:11:17.346 ----------------------------------------------------- 00:11:17.346 Suppressions used: 00:11:17.346 count bytes template 00:11:17.346 1 32 /usr/src/fio/parse.c 00:11:17.346 1 8 libtcmalloc_minimal.so 00:11:17.346 ----------------------------------------------------- 00:11:17.346 00:11:17.346 17:17:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:17.346 17:17:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:17.346 00:11:17.346 real 0m16.981s 00:11:17.346 user 0m13.478s 00:11:17.346 sys 0m2.052s 00:11:17.346 17:17:27 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.347 ************************************ 00:11:17.347 END TEST nvme_fio 00:11:17.347 ************************************ 00:11:17.347 17:17:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:17.347 17:17:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:17.347 ************************************ 00:11:17.347 END TEST nvme 00:11:17.347 ************************************ 00:11:17.347 00:11:17.347 real 1m28.753s 00:11:17.347 user 3m36.811s 00:11:17.347 sys 0m14.939s 00:11:17.347 17:17:27 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.347 17:17:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.347 17:17:27 -- common/autotest_common.sh@1142 -- # return 0 00:11:17.347 17:17:27 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:17.347 17:17:27 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:17.347 17:17:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:17.347 17:17:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.347 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:11:17.347 ************************************ 00:11:17.347 START TEST nvme_scc 00:11:17.347 ************************************ 00:11:17.347 17:17:27 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:17.347 * Looking for test storage... 
00:11:17.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:17.347 17:17:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.347 17:17:27 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.347 17:17:27 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.347 17:17:27 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.347 17:17:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.347 17:17:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.347 17:17:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.347 17:17:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:17.347 17:17:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:17.347 17:17:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:17.347 17:17:27 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.347 17:17:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:17.347 17:17:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:17.347 17:17:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:17.347 17:17:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.605 Waiting for block devices as requested 00:11:17.605 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.605 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.862 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.862 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.129 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.129 17:17:33 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:23.129 17:17:33 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:23.129 17:17:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.129 17:17:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:23.129 17:17:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:23.129 17:17:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:23.129 17:17:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:23.129 17:17:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:23.129 17:17:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.130 17:17:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:23.130 
17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:23.130 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:23.131 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:23.132 17:17:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.132 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.133 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:23.134 17:17:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.134 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.135 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.136 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:23.137 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:23.138 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.139 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:23.140 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:23.141 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
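(Editor's note) The trace above is the nvme_get helper feeding /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 through an IFS=: / read -r reg val loop and eval'ing each field into a global associative array (nvme0n1[nsze]=0x140000, nvme0n1[flbas]=0x4, ...). A minimal sketch of that pattern, assuming plain "key : value" nvme-cli output; the key normalisation and array name here are illustrative, not the exact nvme/functions.sh code:

  #!/usr/bin/env bash
  # Sketch of the id-ns capture pattern seen in the trace (not the exact script).
  declare -gA nvme0n1=()

  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue      # skip blanks and lines without a value
      reg=${reg//[[:space:]]/}                  # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
      val=${val# }                              # drop the space after the colon
      eval "nvme0n1[$reg]=\"$val\""             # nvme0n1[nsze]="0x140000", ...
  done < <(nvme id-ns /dev/nvme0n1)             # whichever nvme-cli binary is installed

  echo "nsze=${nvme0n1[nsze]} flbas=${nvme0n1[flbas]}"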
00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.142 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.143 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
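(Editor's note) The mssrl/mcl/msrc values just captured (128, 128, 127) are the namespace's Copy limits, which is what an SCC (Simple Copy) test ultimately cares about. A hedged sketch of how those limits could be checked, assuming the usual spec meaning (max blocks per source range, max total blocks, 0's-based max range count); the helper name is made up and the test scripts may not perform this exact check:

  # Illustrative only: validate a copy request against the limits captured above.
  can_copy() {
      local -n _ns=$1                              # e.g. the nvme0n1 associative array
      local ranges=$2 blocks_per_range=$3
      (( ranges <= _ns[msrc] + 1 )) &&             # msrc is 0's based: 127 -> 128 ranges
      (( blocks_per_range <= _ns[mssrl] )) &&      # per-range length limit
      (( ranges * blocks_per_range <= _ns[mcl] ))  # total length limit
  }

  can_copy nvme0n1 2 64 && echo "2 ranges x 64 blocks is within the copy limits"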
00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.144 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.145 17:17:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.145 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:23.146 17:17:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:23.146 17:17:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:23.146 17:17:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.147 17:17:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.147 17:17:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.147 17:17:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.147 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.148 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.149 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:23.150 17:17:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:23.150 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 
17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.151 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:23.152 17:17:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.152 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:23.153 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:23.154 17:17:33 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.154 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:23.155 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:23.156 17:17:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:23.156 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.157 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.158 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:23.159 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.160 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 
17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
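(Note on the trace above: the repeating IFS=: / read -r reg val / eval entries are the namespace-identify parser in nvme/functions.sh walking "reg : val" lines from nvme-cli and storing each pair in a bash associative array named after the device, here nvme1n1. A minimal sketch of that loop, reconstructed from the pattern visible in the trace and not the verbatim SPDK source, looks like this:

# Minimal sketch of the field-parsing loop traced above. Assumptions: the helper
# is nvme_get as named in the trace, and nvme-cli prints one "reg : val" per line.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                        # declares the global array, e.g. nvme1n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # strip padding around the register name
        [[ -n $reg && -n $val ]] || continue   # skip the banner line with no value
        eval "${ref}[${reg}]=\"${val# }\""     # e.g. nvme1n1[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# usage as seen in the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1
End of note; the captured trace continues below.)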
00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:23.427 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:23.428 17:17:34 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:23.428 17:17:34 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.428 17:17:34 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.428 17:17:34 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.428 17:17:34 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
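(Note: the switch from nvme1 to nvme2 a little above (functions.sh entries @47-@63) comes from an outer discovery loop that iterates the controllers under /sys/class/nvme, filters them through pci_can_use from scripts/common.sh, and then runs nvme_get for the controller and each of its namespaces. A rough reconstruction under those assumptions, with the PCI-address lookup guessed rather than taken from the log:

# Rough reconstruction of the discovery loop visible in the trace; nvme_get is the
# parser sketched earlier, pci_can_use is the helper from scripts/common.sh.
declare -A ctrls nvmes bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # PCI BDF, e.g. 0000:00:12.0 (assumed lookup)
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                             # e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills nvme2[...]
    for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"  # fills nvme2n1[...], nvme2n2[...]
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # name of the per-controller namespace map
    bdfs[$ctrl_dev]=$pci
done
End of note; the captured trace continues below.)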
00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:23.428 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
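(Note: most controller fields captured for these QEMU devices are zero, but the one the nvme_scc suite cares about appears a few entries further down, where oncs comes back as 0x15d. Assuming ONCS bit 8 is the Copy-command bit, as in recent NVMe revisions, a test could gate itself on the array once it is filled; this is a hypothetical check, not necessarily how the suite does it:

# Hypothetical gate for the simple-copy test, using the nvme2[...] array built above.
# Assumption: ONCS bit 8 advertises the Copy command (NVMe 2.0 numbering).
oncs=${nvme2[oncs]:-0}
if (( (oncs >> 8) & 1 )); then
    echo "nvme2 advertises Copy (ONCS=$oncs), simple-copy tests can run"
else
    echo "nvme2 does not advertise Copy, skipping"
fi
End of note; the captured trace continues below.)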
00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:23.429 17:17:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 
17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.429 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:23.430 17:17:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.430 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:23.431 17:17:34 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.431 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.432 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:23.433 17:17:34 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:23.433 17:17:34 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.433 17:17:34 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:23.433 17:17:34 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.433 17:17:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.433 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.693 17:17:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:23.693 17:17:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.693 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:23.694 17:17:34 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
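The sqes/cqes values captured just above (0x66 and 0x44) are packed 4-bit fields: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each as a power-of-two byte count. A quick worked decode of those two values, as a standalone sketch (the helper name is hypothetical, not part of functions.sh):

    #!/usr/bin/env bash
    # Worked decode of the sqes/cqes values reported above (0x66 and 0x44).
    # Low nibble = required entry-size exponent, high nibble = maximum; size = 2^exp bytes.
    decode_qes() {
        local name=$1 val=$2
        printf '%s: min %d bytes, max %d bytes\n' \
            "$name" $(( 1 << (val & 0xf) )) $(( 1 << (val >> 4) ))
    }
    decode_qes sqes 0x66   # -> 64-byte submission queue entries
    decode_qes cqes 0x44   # -> 16-byte completion queue entries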
00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.694 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:23.695 
17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
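The long run of IFS=: / read / eval lines above is functions.sh turning `nvme id-ctrl` output into a bash associative array, one identify field per key. A condensed sketch of that pattern, assuming nvme-cli is installed and using placeholder array/device names rather than the exact nvme_get helper:

    #!/usr/bin/env bash
    # Sketch of the nvme_get-style loop traced above: parse "field : value" lines
    # from nvme-cli id-ctrl into an associative array keyed by register name.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # strip padding around the field name
        val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"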
00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:23.695 17:17:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:23.695 17:17:34 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:23.695 17:17:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:23.696 17:17:34 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:23.696 17:17:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:23.696 17:17:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:23.696 17:17:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.857 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.857 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.857 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.857 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.857 17:17:35 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:24.857 17:17:35 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:24.857 17:17:35 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.857 17:17:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:24.857 ************************************ 00:11:24.857 START TEST nvme_simple_copy 00:11:24.857 ************************************ 00:11:24.857 17:17:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:25.144 Initializing NVMe Controllers 00:11:25.144 Attaching to 0000:00:10.0 00:11:25.144 Controller supports SCC. Attached to 0000:00:10.0 00:11:25.144 Namespace ID: 1 size: 6GB 00:11:25.144 Initialization complete. 00:11:25.144 00:11:25.144 Controller QEMU NVMe Ctrl (12340 ) 00:11:25.144 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:25.144 Namespace Block Size:4096 00:11:25.144 Writing LBAs 0 to 63 with Random Data 00:11:25.144 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:25.144 LBAs matching Written Data: 64 00:11:25.144 00:11:25.144 real 0m0.317s 00:11:25.144 user 0m0.124s 00:11:25.144 sys 0m0.089s 00:11:25.144 17:17:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.144 ************************************ 00:11:25.144 END TEST nvme_simple_copy 00:11:25.144 ************************************ 00:11:25.144 17:17:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:25.144 17:17:35 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:11:25.144 ************************************ 00:11:25.144 END TEST nvme_scc 00:11:25.144 ************************************ 00:11:25.144 00:11:25.144 real 0m8.316s 00:11:25.144 user 0m1.440s 00:11:25.144 sys 0m1.689s 00:11:25.144 17:17:35 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.144 17:17:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:25.402 17:17:36 -- common/autotest_common.sh@1142 -- # return 0 00:11:25.402 17:17:36 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:25.402 17:17:36 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:25.402 17:17:36 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:25.402 17:17:36 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:25.402 17:17:36 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:25.402 17:17:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:25.402 17:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.402 17:17:36 -- common/autotest_common.sh@10 -- # set +x 00:11:25.402 ************************************ 00:11:25.402 START TEST nvme_fdp 00:11:25.402 ************************************ 00:11:25.402 17:17:36 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:11:25.402 * Looking for test storage... 
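Before the simple_copy run above, the trace walked every discovered controller (nvme1, nvme0, nvme3, nvme2), echoed the ones whose ONCS value advertises the Copy command (bit 8, which 0x15d has set), and took the first hit: nvme1 at 0000:00:10.0. A minimal sketch of that selection, assuming the controller/bdf maps populated earlier in the trace and using abbreviated stand-in names for the helpers:

    #!/usr/bin/env bash
    # Sketch of the get_ctrls_with_feature/ctrl_has_scc flow seen above
    # (ctrl_oncs and bdfs are stand-ins for the arrays built by functions.sh).
    declare -A ctrl_oncs=([nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d)
    declare -A bdfs=([nvme1]=0000:00:10.0)   # only the entry this test ends up using

    pick_scc_ctrl() {
        local ctrl
        for ctrl in "${!ctrl_oncs[@]}"; do
            # bit 8 of ONCS => Copy command, i.e. the controller can run simple copy
            (( ctrl_oncs[$ctrl] & 1 << 8 )) && { echo "$ctrl"; return 0; }
        done
        return 1
    }

    ctrl=$(pick_scc_ctrl) || exit 1
    echo "running simple_copy against $ctrl (${bdfs[$ctrl]:-unknown bdf})"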
00:11:25.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:25.402 17:17:36 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.402 17:17:36 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.402 17:17:36 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.402 17:17:36 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.402 17:17:36 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.402 17:17:36 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.402 17:17:36 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.402 17:17:36 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:25.402 17:17:36 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:25.402 17:17:36 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:25.402 17:17:36 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.402 17:17:36 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:25.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.919 Waiting for block devices as requested 00:11:25.919 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:26.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:26.176 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:26.176 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.443 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:31.443 17:17:42 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:31.443 17:17:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:31.443 17:17:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:31.443 17:17:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.443 17:17:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 
17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.443 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:31.444 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:31.444 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:31.444 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:31.445 17:17:42 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.445 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.446 
17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:31.446 
17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.446 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:31.447 17:17:42 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
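The lbaf0 through lbaf5 strings captured just above (lbaf6 and lbaf7 follow below) describe the LBA formats nvme0n1 supports: ms is the metadata size in bytes, lbads is the data size as a power of two, and rp is the relative performance hint. The flbas value of 0x4 recorded earlier selects lbaf4, which is why that entry is tagged "(in use)". A small standalone helper, purely illustrative and not part of nvme/functions.sh, that decodes one of these strings:

  # decode_lbaf is a hypothetical helper for illustration only; it parses an
  # lbafN string as captured in the trace and prints the metadata and data sizes.
  decode_lbaf() {
    local str=$1 ms lbads
    ms=$(grep -o 'ms:[0-9]*' <<<"$str" | cut -d: -f2)
    lbads=$(grep -o 'lbads:[0-9]*' <<<"$str" | cut -d: -f2)
    printf 'metadata: %s B, logical block: %s B\n' "$ms" "$((1 << lbads))"
  }
  decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # metadata: 0 B, logical block: 4096 B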
00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:31.447 17:17:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:31.447 17:17:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:31.447 17:17:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:31.448 17:17:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.448 17:17:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:31.448 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 
17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.448 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:31.449 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
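The wctemp and cctemp fields recorded above (343 and 373) are the warning and critical composite temperature thresholds, which the Identify Controller data structure reports as integer kelvin values; converted to whole degrees Celsius, this emulated controller warns at roughly 70 C and goes critical at roughly 100 C:

  # Kelvin-to-Celsius conversion for the thresholds seen above, rounded to
  # whole degrees (the spec stores them as integer kelvin).
  for k in 343 373; do
    printf '%d K = %d C\n' "$k" "$((k - 273))"
  done
  # 343 K = 70 C
  # 373 K = 100 C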
00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:31.449 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:31.449 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:31.450 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
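The trace above is nvme/functions.sh building a global associative array (nvme1n1) out of `nvme id-ns` output: each output line is split on ':' into a register name and a value, empty values are skipped, and the pair is assigned with eval. A minimal sketch of that pattern, assuming nvme-cli is reachable on PATH (the job itself invokes /usr/local/src/nvme-cli/nvme) and the device node exists; the helper name and whitespace handling are illustrative, not the original script:

#!/usr/bin/env bash
# Sketch of the id-ns/id-ctrl parsing loop seen in the trace: store every
# "reg : val" pair reported by nvme-cli into a global associative array
# named after the device (e.g. nvme1n1[nsze]=0x17a17a).
nvme_get_sketch() {
    local ref=$1 op=$2 dev=$3 reg val
    local -gA "$ref=()"                     # declare the global map, e.g. nvme1n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # keys like "nsze", "flbas", "lbaf7"
        [[ -n $val ]] || continue           # lines without a value are skipped
        eval "${ref}[${reg}]=\"${val# }\""  # e.g. nvme1n1[nsze]="0x17a17a"
    done < <(nvme "$op" "$dev")
}
# Hypothetical invocation matching the traced one:
#   nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1
#   echo "${nvme1n1[flbas]}"   # -> 0x7 on this rig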
00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
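With the fields captured this way, the reported sizes are straightforward to turn into bytes: nsze counts logical blocks, and the low nibble of flbas selects which lbafN entry is in use, whose lbads field is log2 of the block size. A short worked example using the values traced for nvme1n1 (nsze=0x17a17a, flbas=0x7, and the lbaf7 entry dumped a little further down); the ns array is a stand-in for the populated nvme1n1 map:

# Worked example: namespace size in bytes from the traced id-ns fields.
declare -A ns=( [nsze]="0x17a17a" [flbas]="0x7" [lbaf7]="ms:64 lbads:12 rp:0 (in use)" )
fmt=$(( ${ns[flbas]} & 0xf ))                                  # low nibble of FLBAS -> format index 7
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$fmt]}")
blocks=$(( ${ns[nsze]} ))                                      # bash arithmetic accepts the 0x prefix
echo "$blocks blocks * $(( 1 << lbads )) B = $(( blocks * (1 << lbads) )) bytes"
# -> 1548666 blocks * 4096 B = 6343335936 bytes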
00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:31.451 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 
17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
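Once the per-namespace fields are in, the trace below records the controller in the lookup tables the rest of the run relies on: ctrls, nvmes and bdfs keyed by device name, plus ordered_ctrls indexed by controller number, before moving on to the next entry under /sys/class/nvme (nvme2 at 0000:00:12.0). A sketch of that enumeration step; the array names mirror the trace, but resolving the PCI address through the device symlink is an assumption made for the illustration, not necessarily how functions.sh does it:

# Sketch: walk /sys/class/nvme and register each controller in the maps
# the traced script maintains (names mirror the trace, logic is illustrative).
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
    ctrls["$ctrl_dev"]=$ctrl_dev                      # ctrls[nvme2]=nvme2
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the namespace map, e.g. nvme2_ns
    bdfs["$ctrl_dev"]=$pci                            # bdfs[nvme2]=0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # ordered_ctrls[2]=nvme2
done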
00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:31.722 17:17:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:31.722 17:17:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:31.722 17:17:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.722 17:17:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:31.722 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.722 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:31.723 17:17:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.723 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:31.724 17:17:42 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 
17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:31.724 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.724 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:31.725 17:17:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:31.726 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
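A minimal sketch of the pattern the trace above is executing, assuming nvme-cli's plain-text "field : value" id-ns output; the names and trimming here are illustrative, not the actual nvme/functions.sh implementation:

    #!/usr/bin/env bash
    # Parse `nvme id-ns` text output into a bash associative array keyed by field name.
    declare -A ns=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # "nsze   " -> "nsze", "lbaf  4" -> "lbaf4"
        [[ -n $reg && -n $val ]] || continue   # skip the header line and anything without a value
        ns[$reg]=${val# }                      # keep the value, dropping one leading space
    done < <(nvme id-ns /dev/nvme2n2)          # device path taken from the trace above
    printf 'nsze=%s flbas=%s\n' "${ns[nsze]}" "${ns[flbas]}"
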
00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.726 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.729 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:31.730 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.730 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:31.731 17:17:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:31.731 17:17:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:31.731 17:17:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.731 17:17:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:31.731 17:17:42 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:31.731 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
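Illustratively, and assuming CTRATT bit 19 is the Flexible Data Placement capability bit as defined in the NVMe base specification, the 0x88010 value captured for nvme3 above could be checked for FDP support like this (a sketch, not the test's own logic):

    #!/usr/bin/env bash
    ctratt=0x88010                         # nvme3[ctratt] as read in the trace above
    if (( (ctratt >> 19) & 1 )); then      # bit 19: Flexible Data Placement (assumption)
        echo "controller reports FDP support"
    fi
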
00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:31.732 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 
17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:31.733 17:17:42 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:31.733 17:17:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:31.733 17:17:42 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:31.733 17:17:42 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.866 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.866 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.866 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.866 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.124 17:17:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:33.124 17:17:43 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:33.124 17:17:43 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.124 17:17:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:33.124 ************************************ 00:11:33.124 START TEST nvme_flexible_data_placement 00:11:33.124 ************************************ 00:11:33.124 17:17:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:33.383 Initializing NVMe Controllers 00:11:33.383 Attaching to 0000:00:13.0 00:11:33.383 Controller supports FDP Attached to 0000:00:13.0 00:11:33.383 Namespace ID: 1 Endurance Group ID: 1 
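The controller selection above comes down to a single bit: CTRATT bit 19 (FDP supported) from Identify Controller. nvme0, nvme1 and nvme2 report ctratt=0x8000, so the test fails for them; nvme3 reports 0x88010, where bit 19 (0x80000) is set, and it becomes the controller handed to the FDP test. A minimal standalone sketch of the same check, assuming nvme-cli with JSON output and jq are available (the harness itself reads the value from the register cache built by nvme/functions.sh, not from nvme-cli):

  #!/usr/bin/env bash
  # Sketch only: report which controllers advertise Flexible Data Placement
  # by testing CTRATT bit 19 of Identify Controller.
  ctrl_has_fdp() {
      local dev=$1 ctratt
      ctratt=$(nvme id-ctrl "/dev/$dev" --output-format=json | jq -r '.ctratt')
      (( ctratt & (1 << 19) ))
  }
  for dev in nvme0 nvme1 nvme2 nvme3; do
      ctrl_has_fdp "$dev" && echo "$dev supports FDP"
  done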
00:11:33.383 Initialization complete. 00:11:33.383 00:11:33.383 ================================== 00:11:33.383 == FDP tests for Namespace: #01 == 00:11:33.383 ================================== 00:11:33.383 00:11:33.383 Get Feature: FDP: 00:11:33.383 ================= 00:11:33.383 Enabled: Yes 00:11:33.383 FDP configuration Index: 0 00:11:33.383 00:11:33.383 FDP configurations log page 00:11:33.383 =========================== 00:11:33.383 Number of FDP configurations: 1 00:11:33.383 Version: 0 00:11:33.383 Size: 112 00:11:33.383 FDP Configuration Descriptor: 0 00:11:33.383 Descriptor Size: 96 00:11:33.383 Reclaim Group Identifier format: 2 00:11:33.383 FDP Volatile Write Cache: Not Present 00:11:33.383 FDP Configuration: Valid 00:11:33.383 Vendor Specific Size: 0 00:11:33.383 Number of Reclaim Groups: 2 00:11:33.383 Number of Reclaim Unit Handles: 8 00:11:33.383 Max Placement Identifiers: 128 00:11:33.383 Number of Namespaces Supported: 256 00:11:33.383 Reclaim unit Nominal Size: 6000000 bytes 00:11:33.383 Estimated Reclaim Unit Time Limit: Not Reported 00:11:33.383 RUH Desc #000: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #001: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #002: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #003: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #004: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #005: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #006: RUH Type: Initially Isolated 00:11:33.383 RUH Desc #007: RUH Type: Initially Isolated 00:11:33.383 00:11:33.383 FDP reclaim unit handle usage log page 00:11:33.383 ====================================== 00:11:33.383 Number of Reclaim Unit Handles: 8 00:11:33.383 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:33.383 RUH Usage Desc #001: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #002: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #003: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #004: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #005: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #006: RUH Attributes: Unused 00:11:33.383 RUH Usage Desc #007: RUH Attributes: Unused 00:11:33.383 00:11:33.383 FDP statistics log page 00:11:33.383 ======================= 00:11:33.383 Host bytes with metadata written: 1317220352 00:11:33.383 Media bytes with metadata written: 1317400576 00:11:33.383 Media bytes erased: 0 00:11:33.383 00:11:33.383 FDP Reclaim unit handle status 00:11:33.383 ============================== 00:11:33.383 Number of RUHS descriptors: 2 00:11:33.383 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000057cd 00:11:33.383 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:33.383 00:11:33.383 FDP write on placement id: 0 success 00:11:33.383 00:11:33.383 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:33.383 00:11:33.383 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:33.383 00:11:33.383 Get Feature: FDP Events for Placement handle: #0 00:11:33.383 ======================== 00:11:33.383 Number of FDP Events: 6 00:11:33.383 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:33.383 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:33.383 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:33.383 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:33.383 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:33.383 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
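The statistics block above also yields a quick write-amplification estimate (media bytes written divided by host bytes written); a throwaway computation with this run's numbers:

  # Rough write-amplification factor from the FDP statistics log page above.
  awk 'BEGIN { host = 1317220352; media = 1317400576; printf "WAF ~= %.4f\n", media / host }'
  # prints: WAF ~= 1.0001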
00:11:33.383 00:11:33.383 FDP events log page 00:11:33.383 =================== 00:11:33.383 Number of FDP events: 1 00:11:33.383 FDP Event #0: 00:11:33.383 Event Type: RU Not Written to Capacity 00:11:33.383 Placement Identifier: Valid 00:11:33.383 NSID: Valid 00:11:33.383 Location: Valid 00:11:33.383 Placement Identifier: 0 00:11:33.383 Event Timestamp: 4 00:11:33.383 Namespace Identifier: 1 00:11:33.383 Reclaim Group Identifier: 0 00:11:33.383 Reclaim Unit Handle Identifier: 0 00:11:33.383 00:11:33.383 FDP test passed 00:11:33.383 00:11:33.383 real 0m0.273s 00:11:33.383 user 0m0.086s 00:11:33.383 sys 0m0.085s 00:11:33.383 17:17:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.383 17:17:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:33.383 ************************************ 00:11:33.383 END TEST nvme_flexible_data_placement 00:11:33.383 ************************************ 00:11:33.383 17:17:44 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.383 ************************************ 00:11:33.383 END TEST nvme_fdp 00:11:33.383 ************************************ 00:11:33.383 00:11:33.383 real 0m8.095s 00:11:33.383 user 0m1.285s 00:11:33.383 sys 0m1.788s 00:11:33.383 17:17:44 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.383 17:17:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:33.383 17:17:44 -- common/autotest_common.sh@1142 -- # return 0 00:11:33.383 17:17:44 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:33.383 17:17:44 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:33.383 17:17:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.383 17:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.383 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.383 ************************************ 00:11:33.383 START TEST nvme_rpc 00:11:33.383 ************************************ 00:11:33.383 17:17:44 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:33.642 * Looking for test storage... 
00:11:33.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=84942 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:33.642 17:17:44 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 84942 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 84942 ']' 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.642 17:17:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.642 [2024-07-15 17:17:44.463554] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:11:33.642 [2024-07-15 17:17:44.463761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84942 ] 00:11:33.901 [2024-07-15 17:17:44.616040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
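The bdf=0000:00:10.0 chosen above is simply the first entry of the controller list: get_first_nvme_bdf collects every traddr that scripts/gen_nvme.sh reports and echoes element zero. A condensed sketch of that selection, using the same gen_nvme.sh | jq pipeline shown in the trace (the error message is an illustrative addition, not part of the test):

  # Sketch of get_first_nvme_bdf as exercised above.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "${bdfs[0]}"   # -> 0000:00:10.0 on this runner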
00:11:33.901 [2024-07-15 17:17:44.641929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:33.901 [2024-07-15 17:17:44.748397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.901 [2024-07-15 17:17:44.748444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.835 17:17:45 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.835 17:17:45 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:34.835 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:35.093 Nvme0n1 00:11:35.093 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:35.093 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:35.351 request: 00:11:35.351 { 00:11:35.351 "bdev_name": "Nvme0n1", 00:11:35.351 "filename": "non_existing_file", 00:11:35.351 "method": "bdev_nvme_apply_firmware", 00:11:35.351 "req_id": 1 00:11:35.351 } 00:11:35.351 Got JSON-RPC error response 00:11:35.351 response: 00:11:35.351 { 00:11:35.351 "code": -32603, 00:11:35.351 "message": "open file failed." 00:11:35.351 } 00:11:35.351 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:35.351 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:35.351 17:17:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:35.609 17:17:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:35.609 17:17:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 84942 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 84942 ']' 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 84942 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84942 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.609 killing process with pid 84942 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84942' 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@967 -- # kill 84942 00:11:35.609 17:17:46 nvme_rpc -- common/autotest_common.sh@972 -- # wait 84942 00:11:36.177 00:11:36.177 real 0m2.599s 00:11:36.177 user 0m5.051s 00:11:36.177 sys 0m0.705s 00:11:36.177 17:17:46 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.177 17:17:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.177 ************************************ 00:11:36.177 END TEST nvme_rpc 00:11:36.177 ************************************ 00:11:36.177 17:17:46 -- common/autotest_common.sh@1142 -- # return 0 00:11:36.177 17:17:46 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:36.177 17:17:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:36.177 17:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.177 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:11:36.177 ************************************ 00:11:36.177 START TEST nvme_rpc_timeouts 
00:11:36.177 ************************************ 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:36.177 * Looking for test storage... 00:11:36.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_85002 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_85002 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=85026 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:36.177 17:17:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 85026 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 85026 ']' 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.177 17:17:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:36.436 [2024-07-15 17:17:47.035371] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:11:36.436 [2024-07-15 17:17:47.035602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ] 00:11:36.436 [2024-07-15 17:17:47.188092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:36.436 [2024-07-15 17:17:47.203958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.694 [2024-07-15 17:17:47.302189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.694 [2024-07-15 17:17:47.302252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.262 17:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.262 17:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:11:37.262 Checking default timeout settings: 00:11:37.262 17:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:37.262 17:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:37.830 Making settings changes with rpc: 00:11:37.830 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:37.830 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:37.830 Check default vs. modified settings: 00:11:37.830 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:37.830 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:38.397 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:38.397 17:17:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:38.397 Setting action_on_timeout is changed as expected. 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:38.397 Setting timeout_us is changed as expected. 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:38.397 Setting timeout_admin_us is changed as expected. 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
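All three checks follow the same pattern: dump the configuration with rpc.py save_config before and after bdev_nvme_set_options, grep the field of interest out of each dump, and require that its value changed. Condensed into one function (same grep/awk/sed pipeline and temp files as the trace above; a sketch, not the test's own helper):

  # Sketch of the default-vs-modified comparison traced above.
  check_setting() {
      local setting=$1 before after
      before=$(grep "$setting" /tmp/settings_default_85002 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_85002 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  }
  for s in action_on_timeout timeout_us timeout_admin_us; do
      check_setting "$s"
  done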
00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_85002 /tmp/settings_modified_85002 00:11:38.397 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 85026 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 85026 ']' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 85026 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85026 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.397 killing process with pid 85026 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85026' 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 85026 00:11:38.397 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 85026 00:11:38.964 RPC TIMEOUT SETTING TEST PASSED. 00:11:38.964 17:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:38.964 00:11:38.964 real 0m2.700s 00:11:38.964 user 0m5.463s 00:11:38.964 sys 0m0.656s 00:11:38.964 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.964 ************************************ 00:11:38.964 END TEST nvme_rpc_timeouts 00:11:38.964 17:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:38.964 ************************************ 00:11:38.964 17:17:49 -- common/autotest_common.sh@1142 -- # return 0 00:11:38.964 17:17:49 -- spdk/autotest.sh@243 -- # uname -s 00:11:38.964 17:17:49 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:38.964 17:17:49 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.964 17:17:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:38.964 17:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.964 17:17:49 -- common/autotest_common.sh@10 -- # set +x 00:11:38.964 ************************************ 00:11:38.964 START TEST sw_hotplug 00:11:38.964 ************************************ 00:11:38.964 17:17:49 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:38.964 * Looking for test storage... 
00:11:38.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:38.964 17:17:49 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:39.482 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.482 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.482 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.482 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:39.482 17:17:50 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:39.482 17:17:50 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:39.482 17:17:50 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:39.482 17:17:50 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:40.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.049 Waiting for block devices as requested 00:11:40.307 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.307 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.307 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.565 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.842 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:45.842 17:17:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:45.842 17:17:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.842 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:46.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.111 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:46.369 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:46.627 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.627 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.627 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:46.627 17:17:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=85868 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:46.939 17:17:57 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:46.939 17:17:57 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:46.939 17:17:57 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:46.939 17:17:57 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:46.939 17:17:57 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:46.939 17:17:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:46.939 Initializing NVMe Controllers 00:11:46.939 Attaching to 0000:00:10.0 00:11:46.939 Attaching to 0000:00:11.0 00:11:46.939 Attached to 0000:00:10.0 00:11:46.939 Attached to 0000:00:11.0 00:11:46.939 Initialization complete. Starting I/O... 
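The setup phase traced above works like this: nvme_in_userspace walks PCI class code 01/08/02 (NVMe) with lspci, the resulting BDF list is trimmed to nvme_count=2, and setup.sh is re-run with PCI_ALLOWED so only those two controllers stay on uio_pci_generic for the hotplug runs. A condensed sketch of the same steps; the list_nvme_bdfs wrapper name is illustrative, but the pipeline itself is the one shown in the trace:

# List NVMe controllers (class 01, subclass 08, prog-if 02) as full BDFs,
# using the same lspci pipeline as iter_all_pci_class_code above.
list_nvme_bdfs() {
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
}

nvmes=($(list_nvme_bdfs))            # e.g. 0000:00:10.0 ... 0000:00:13.0
nvme_count=2
nvmes=("${nvmes[@]::nvme_count}")    # keep only the first nvme_count controllers
./scripts/setup.sh reset             # hand everything back to the kernel nvme driver first
PCI_ALLOWED="${nvmes[*]}" ./scripts/setup.sh   # then rebind just the allowed two to uio_pci_generic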
00:11:46.939 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:46.939 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:46.939 00:11:48.311 QEMU NVMe Ctrl (12340 ): 1299 I/Os completed (+1299) 00:11:48.311 QEMU NVMe Ctrl (12341 ): 1351 I/Os completed (+1351) 00:11:48.311 00:11:49.245 QEMU NVMe Ctrl (12340 ): 2931 I/Os completed (+1632) 00:11:49.245 QEMU NVMe Ctrl (12341 ): 3021 I/Os completed (+1670) 00:11:49.245 00:11:50.180 QEMU NVMe Ctrl (12340 ): 4835 I/Os completed (+1904) 00:11:50.180 QEMU NVMe Ctrl (12341 ): 4964 I/Os completed (+1943) 00:11:50.180 00:11:51.116 QEMU NVMe Ctrl (12340 ): 6627 I/Os completed (+1792) 00:11:51.116 QEMU NVMe Ctrl (12341 ): 6830 I/Os completed (+1866) 00:11:51.116 00:11:52.049 QEMU NVMe Ctrl (12340 ): 8479 I/Os completed (+1852) 00:11:52.049 QEMU NVMe Ctrl (12341 ): 8715 I/Os completed (+1885) 00:11:52.049 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.982 [2024-07-15 17:18:03.528400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:52.982 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:52.982 [2024-07-15 17:18:03.530504] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.530578] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.530620] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.530656] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:52.982 [2024-07-15 17:18:03.532984] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.533043] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.533074] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.533096] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.982 [2024-07-15 17:18:03.557212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:52.982 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:52.982 [2024-07-15 17:18:03.558982] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.559042] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.559068] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.559093] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:52.982 [2024-07-15 17:18:03.561188] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.561254] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.561283] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 [2024-07-15 17:18:03.561308] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.982 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:52.982 EAL: Scan for (pci) bus failed. 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.982 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.982 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:52.982 Attaching to 0000:00:10.0 00:11:52.982 Attached to 0000:00:10.0 00:11:53.240 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.240 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.240 17:18:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:53.240 Attaching to 0000:00:11.0 00:11:53.240 Attached to 0000:00:11.0 00:11:54.173 QEMU NVMe Ctrl (12340 ): 1600 I/Os completed (+1600) 00:11:54.173 QEMU NVMe Ctrl (12341 ): 1524 I/Os completed (+1524) 00:11:54.173 00:11:55.106 QEMU NVMe Ctrl (12340 ): 3669 I/Os completed (+2069) 00:11:55.106 QEMU NVMe Ctrl (12341 ): 3734 I/Os completed (+2210) 00:11:55.106 00:11:56.039 QEMU NVMe Ctrl (12340 ): 5521 I/Os completed (+1852) 00:11:56.039 QEMU NVMe Ctrl (12341 ): 5618 I/Os completed (+1884) 00:11:56.039 00:11:56.972 QEMU NVMe Ctrl (12340 ): 7213 I/Os completed (+1692) 00:11:56.972 QEMU NVMe Ctrl (12341 ): 7443 I/Os completed (+1825) 00:11:56.972 00:11:57.906 QEMU NVMe Ctrl (12340 ): 8749 I/Os completed (+1536) 00:11:57.906 QEMU NVMe Ctrl (12341 ): 9097 I/Os completed (+1654) 00:11:57.906 00:11:59.278 QEMU NVMe Ctrl (12340 ): 10464 I/Os completed (+1715) 00:11:59.278 QEMU NVMe Ctrl (12341 ): 10838 I/Os completed (+1741) 00:11:59.278 00:12:00.209 QEMU NVMe Ctrl (12340 ): 12280 I/Os completed (+1816) 00:12:00.209 QEMU NVMe Ctrl (12341 ): 12675 I/Os completed (+1837) 
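Each hotplug event above follows the same pattern: sw_hotplug.sh line 40 echoes 1 once per controller to trigger a surprise removal, line 56 echoes 1 again to rescan the bus, and lines 58-62 echo uio_pci_generic plus the BDF for each device to rebind it before the next I/O pass. The xtrace does not show which sysfs files those echoes land in, so the following is only a plausible sketch of the standard Linux PCI interface they most likely correspond to:

nvmes=(0000:00:10.0 0000:00:11.0)

for bdf in "${nvmes[@]}"; do                      # line 40: surprise-remove each controller
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # assumed target of the traced 'echo 1'
done

# ... the app notices the removal and aborts outstanding I/O (the errors above) ...

echo 1 > /sys/bus/pci/rescan                      # line 56: bring the devices back
for bdf in "${nvmes[@]}"; do                      # lines 58-62: pin them to uio_pci_generic
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
done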
00:12:00.209 00:12:01.143 QEMU NVMe Ctrl (12340 ): 14032 I/Os completed (+1752) 00:12:01.143 QEMU NVMe Ctrl (12341 ): 14577 I/Os completed (+1902) 00:12:01.143 00:12:02.099 QEMU NVMe Ctrl (12340 ): 15752 I/Os completed (+1720) 00:12:02.099 QEMU NVMe Ctrl (12341 ): 16352 I/Os completed (+1775) 00:12:02.099 00:12:03.030 QEMU NVMe Ctrl (12340 ): 17464 I/Os completed (+1712) 00:12:03.030 QEMU NVMe Ctrl (12341 ): 18164 I/Os completed (+1812) 00:12:03.030 00:12:03.961 QEMU NVMe Ctrl (12340 ): 18955 I/Os completed (+1491) 00:12:03.961 QEMU NVMe Ctrl (12341 ): 19703 I/Os completed (+1539) 00:12:03.961 00:12:04.890 QEMU NVMe Ctrl (12340 ): 20478 I/Os completed (+1523) 00:12:04.890 QEMU NVMe Ctrl (12341 ): 21322 I/Os completed (+1619) 00:12:04.890 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.147 [2024-07-15 17:18:15.870983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:05.147 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:05.147 [2024-07-15 17:18:15.873455] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.873532] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.873569] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.873618] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:05.147 [2024-07-15 17:18:15.876410] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.876471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.876505] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.876537] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.147 [2024-07-15 17:18:15.909950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:05.147 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:05.147 [2024-07-15 17:18:15.912387] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.912452] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.912485] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.912531] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:05.147 [2024-07-15 17:18:15.915203] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.915289] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.915341] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 [2024-07-15 17:18:15.915394] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:05.147 17:18:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.404 Attaching to 0000:00:10.0 00:12:05.404 Attached to 0000:00:10.0 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.404 17:18:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.404 Attaching to 0000:00:11.0 00:12:05.404 Attached to 0000:00:11.0 00:12:05.983 QEMU NVMe Ctrl (12340 ): 960 I/Os completed (+960) 00:12:05.983 QEMU NVMe Ctrl (12341 ): 858 I/Os completed (+858) 00:12:05.983 00:12:06.930 QEMU NVMe Ctrl (12340 ): 2593 I/Os completed (+1633) 00:12:06.930 QEMU NVMe Ctrl (12341 ): 2605 I/Os completed (+1747) 00:12:06.930 00:12:08.305 QEMU NVMe Ctrl (12340 ): 4249 I/Os completed (+1656) 00:12:08.305 QEMU NVMe Ctrl (12341 ): 4400 I/Os completed (+1795) 00:12:08.305 00:12:09.238 QEMU NVMe Ctrl (12340 ): 5962 I/Os completed (+1713) 00:12:09.238 QEMU NVMe Ctrl (12341 ): 6168 I/Os completed (+1768) 00:12:09.238 00:12:10.173 QEMU NVMe Ctrl (12340 ): 7637 I/Os completed (+1675) 00:12:10.173 QEMU NVMe Ctrl (12341 ): 7916 I/Os completed (+1748) 00:12:10.173 00:12:11.105 QEMU NVMe Ctrl (12340 ): 9533 I/Os completed (+1896) 00:12:11.105 QEMU NVMe Ctrl (12341 ): 9815 I/Os completed (+1899) 00:12:11.105 00:12:12.080 QEMU NVMe Ctrl (12340 ): 11173 I/Os completed (+1640) 00:12:12.080 QEMU NVMe Ctrl (12341 ): 11501 I/Os completed (+1686) 00:12:12.080 00:12:13.012 QEMU NVMe Ctrl (12340 ): 12961 I/Os completed (+1788) 00:12:13.012 QEMU NVMe Ctrl (12341 ): 13310 I/Os completed (+1809) 00:12:13.012 00:12:13.945 QEMU 
NVMe Ctrl (12340 ): 14677 I/Os completed (+1716) 00:12:13.945 QEMU NVMe Ctrl (12341 ): 15080 I/Os completed (+1770) 00:12:13.945 00:12:15.320 QEMU NVMe Ctrl (12340 ): 16453 I/Os completed (+1776) 00:12:15.320 QEMU NVMe Ctrl (12341 ): 16941 I/Os completed (+1861) 00:12:15.320 00:12:15.885 QEMU NVMe Ctrl (12340 ): 18041 I/Os completed (+1588) 00:12:15.885 QEMU NVMe Ctrl (12341 ): 18779 I/Os completed (+1838) 00:12:15.885 00:12:17.260 QEMU NVMe Ctrl (12340 ): 19774 I/Os completed (+1733) 00:12:17.260 QEMU NVMe Ctrl (12341 ): 20622 I/Os completed (+1843) 00:12:17.260 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.517 [2024-07-15 17:18:28.214753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:17.517 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:17.517 [2024-07-15 17:18:28.216533] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.216643] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.216688] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.216711] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:17.517 [2024-07-15 17:18:28.219018] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.219095] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.219129] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.219150] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.517 [2024-07-15 17:18:28.241854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:17.517 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:17.517 [2024-07-15 17:18:28.243405] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.243459] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.243484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.243508] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:17.517 [2024-07-15 17:18:28.245267] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.245318] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.245342] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 [2024-07-15 17:18:28.245382] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.517 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:17.775 Attaching to 0000:00:10.0 00:12:17.775 Attached to 0000:00:10.0 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.775 17:18:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.775 Attaching to 0000:00:11.0 00:12:17.775 Attached to 0000:00:11.0 00:12:17.775 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:17.775 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:17.775 [2024-07-15 17:18:28.572404] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:29.967 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:29.967 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:29.967 17:18:40 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.04 00:12:29.967 17:18:40 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.04 00:12:29.967 17:18:40 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:29.967 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.04 00:12:29.967 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.04 2 00:12:29.967 remove_attach_helper took 43.04s to complete (handling 2 nvme drive(s)) 17:18:40 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 85868 00:12:36.522 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (85868) - No such process 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 85868 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=86407 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:36.522 17:18:46 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 86407 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 86407 ']' 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.522 17:18:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.522 [2024-07-15 17:18:46.697514] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:12:36.522 [2024-07-15 17:18:46.697747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86407 ] 00:12:36.522 [2024-07-15 17:18:46.852196] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
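From here the test restarts in target mode: tgt_run_hotplug launches build/bin/spdk_tgt in the background (pid 86407 above) and waitforlisten blocks until the JSON-RPC socket at /var/tmp/spdk.sock accepts requests before any hotplug RPCs are sent. A rough stand-in for that handshake, polling with rpc.py instead of the autotest waitforlisten helper:

./build/bin/spdk_tgt &
spdk_tgt_pid=$!

# Poll the RPC socket until the target answers; give up if the process dies first.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$spdk_tgt_pid" 2> /dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done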
00:12:36.522 [2024-07-15 17:18:46.871972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.522 [2024-07-15 17:18:46.962720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:36.780 17:18:47 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:36.780 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:36.781 17:18:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.340 17:18:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.340 17:18:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.340 17:18:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.340 [2024-07-15 17:18:53.691030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:43.340 17:18:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:43.340 [2024-07-15 17:18:53.695031] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:53.695106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:53.695155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:53.695185] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:53.695222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:53.695240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:53.695265] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:53.695283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:53.695302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:53.695319] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:53.695337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:53.695354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:54.091039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
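In this second pass the script verifies removal through the target rather than the console output: bdev_nvme_set_hotplug -e turns on the target's NVMe hotplug monitor, and bdev_bdfs (sw_hotplug.sh lines 12-13, traced above) asks bdev_get_bdevs which PCI addresses still have bdevs behind them. Reconstructed from the trace, with rpc_cmd standing in for the autotest RPC wrapper and a plain pipe in place of the process substitution shown as /dev/fd/63:

rpc_cmd bdev_nvme_set_hotplug -e     # enable the hotplug monitor in spdk_tgt (line 115)

bdev_bdfs() {
    # PCI addresses of all controllers that currently back an NVMe bdev
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}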
00:12:43.340 [2024-07-15 17:18:54.094543] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:54.094608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:54.094637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:54.094682] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:54.094701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.340 [2024-07-15 17:18:54.094739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.340 [2024-07-15 17:18:54.094758] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.340 [2024-07-15 17:18:54.094793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.341 [2024-07-15 17:18:54.094825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.341 [2024-07-15 17:18:54.094850] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.341 [2024-07-15 17:18:54.094867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.341 [2024-07-15 17:18:54.094887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.616 17:18:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.616 17:18:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.616 17:18:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:43.616 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.895 17:18:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.141 17:19:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.141 [2024-07-15 17:19:06.691314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
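The long backslash-escaped comparison at sw_hotplug.sh line 71 above is only xtrace noise: inside [[ ... == ... ]] bash treats the right-hand side as a pattern, so the trace prints it with every character escaped. The check itself simply confirms that both controllers re-registered as bdevs after the rescan before the next event fires; roughly (the failure handling here is an assumption, the real script relies on its own error trapping):

bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || exit 1   # both controllers must be back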
00:12:56.141 [2024-07-15 17:19:06.694807] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.141 [2024-07-15 17:19:06.694865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.141 [2024-07-15 17:19:06.694897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.141 [2024-07-15 17:19:06.694926] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.141 [2024-07-15 17:19:06.694951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.141 [2024-07-15 17:19:06.694969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.141 [2024-07-15 17:19:06.694992] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.141 [2024-07-15 17:19:06.695009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.141 [2024-07-15 17:19:06.695028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.141 [2024-07-15 17:19:06.695046] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.141 [2024-07-15 17:19:06.695066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.141 [2024-07-15 17:19:06.695083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:56.141 17:19:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:56.398 [2024-07-15 17:19:07.091356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
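Once the removals are issued, remove_attach_helper polls bdev_bdfs until the list drains: as long as any of the two BDFs is still reported it prints the 'Still waiting for ... to be gone' line and sleeps 0.5 s (the line 50-51 loop in the trace). Approximately, with the exact ordering of print and sleep taken loosely from the xtrace:

bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done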
00:12:56.398 [2024-07-15 17:19:07.094967] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.398 [2024-07-15 17:19:07.095054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.398 [2024-07-15 17:19:07.095083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.398 [2024-07-15 17:19:07.095116] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.398 [2024-07-15 17:19:07.095136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.398 [2024-07-15 17:19:07.095173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.398 [2024-07-15 17:19:07.095191] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.398 [2024-07-15 17:19:07.095227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.398 [2024-07-15 17:19:07.095244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.398 [2024-07-15 17:19:07.095264] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.398 [2024-07-15 17:19:07.095297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.398 [2024-07-15 17:19:07.095317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.398 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:56.398 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.399 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.399 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.399 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.399 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.399 17:19:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.399 17:19:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.399 17:19:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.656 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:56.913 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:56.913 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.913 17:19:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.109 [2024-07-15 17:19:19.691752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:09.109 [2024-07-15 17:19:19.694860] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.109 [2024-07-15 17:19:19.694915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.109 [2024-07-15 17:19:19.694947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.109 [2024-07-15 17:19:19.694975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.109 [2024-07-15 17:19:19.694995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.109 [2024-07-15 17:19:19.695010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.109 [2024-07-15 17:19:19.695029] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.109 [2024-07-15 17:19:19.695043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.109 [2024-07-15 17:19:19.695060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.109 [2024-07-15 17:19:19.695076] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.109 [2024-07-15 17:19:19.695093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.109 [2024-07-15 17:19:19.695107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.109 17:19:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:09.109 17:19:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:09.367 [2024-07-15 17:19:20.191783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:09.367 [2024-07-15 17:19:20.194964] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.367 [2024-07-15 17:19:20.195026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.367 [2024-07-15 17:19:20.195052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.367 [2024-07-15 17:19:20.195082] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.367 [2024-07-15 17:19:20.195101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.367 [2024-07-15 17:19:20.195122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.367 [2024-07-15 17:19:20.195139] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.367 [2024-07-15 17:19:20.195157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.367 [2024-07-15 17:19:20.195172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.367 [2024-07-15 17:19:20.195191] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.367 [2024-07-15 17:19:20.195206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.367 [2024-07-15 17:19:20.195224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.624 17:19:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.624 17:19:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.624 17:19:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.624 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:09.883 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.883 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.883 17:19:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.01 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.01 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.01 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.01 2 00:13:22.085 remove_attach_helper took 45.01s to complete (handling 2 nvme drive(s)) 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:22.085 17:19:32 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:22.085 17:19:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:22.085 17:19:32 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.640 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.640 17:19:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.640 17:19:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.640 17:19:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.640 [2024-07-15 17:19:38.727782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:28.640 [2024-07-15 17:19:38.730031] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.640 [2024-07-15 17:19:38.730119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.640 [2024-07-15 17:19:38.730152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.640 [2024-07-15 17:19:38.730181] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.640 [2024-07-15 17:19:38.730209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.640 [2024-07-15 17:19:38.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.640 [2024-07-15 17:19:38.730250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:38.730267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:38.730296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 [2024-07-15 17:19:38.730313] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:38.730343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:38.730410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:28.641 17:19:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:28.641 [2024-07-15 17:19:39.127799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
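The time=43.04 and time=45.01 figures reported after each pass come from timing_cmd in autotest_common.sh: the helper runs remove_attach_helper under bash's time keyword with TIMEFORMAT=%2R so only the wall-clock seconds come back, and sw_hotplug.sh prints the summary line. A simplified sketch of that pattern; the real helper shuffles file descriptors with exec, which is elided here:

timing_cmd() {
    local TIMEFORMAT=%2R time
    # bash's time keyword reports on stderr; capture just that figure
    time=$( { time "$@" > /dev/null; } 2>&1 )
    echo "$time"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" "${#nvmes[@]}"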
00:13:28.641 [2024-07-15 17:19:39.130024] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:39.130116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:39.130145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 [2024-07-15 17:19:39.130178] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:39.130197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:39.130217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 [2024-07-15 17:19:39.130235] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:39.130255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:39.130271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 [2024-07-15 17:19:39.130294] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.641 [2024-07-15 17:19:39.130310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.641 [2024-07-15 17:19:39.130350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.641 17:19:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.641 17:19:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.641 17:19:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:28.641 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:28.899 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:28.899 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:28.899 17:19:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.110 [2024-07-15 17:19:51.628160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:41.110 [2024-07-15 17:19:51.630882] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.110 [2024-07-15 17:19:51.630937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.110 [2024-07-15 17:19:51.630971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.110 [2024-07-15 17:19:51.631001] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.110 [2024-07-15 17:19:51.631023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.110 [2024-07-15 17:19:51.631040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.110 [2024-07-15 17:19:51.631062] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.110 [2024-07-15 17:19:51.631080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.110 [2024-07-15 17:19:51.631100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.110 [2024-07-15 17:19:51.631117] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.110 [2024-07-15 17:19:51.631138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.110 [2024-07-15 17:19:51.631156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:41.110 17:19:51 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.110 17:19:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:41.110 17:19:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:41.368 [2024-07-15 17:19:52.128152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:41.368 [2024-07-15 17:19:52.130469] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.368 [2024-07-15 17:19:52.130538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.368 [2024-07-15 17:19:52.130568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.368 [2024-07-15 17:19:52.130601] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.368 [2024-07-15 17:19:52.130620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.368 [2024-07-15 17:19:52.130641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.368 [2024-07-15 17:19:52.130659] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.368 [2024-07-15 17:19:52.130680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.368 [2024-07-15 17:19:52.130696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.368 [2024-07-15 17:19:52.130718] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:41.368 [2024-07-15 17:19:52.130737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.368 [2024-07-15 17:19:52.130757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:41.626 17:19:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.626 17:19:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.626 17:19:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:41.626 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:41.884 17:19:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.123 17:20:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.123 [2024-07-15 17:20:04.728442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
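The bare "echo 1", "echo uio_pci_generic", and BDF echoes traced at sw_hotplug.sh lines 39-40 and 56-62 are the soft hot-remove and re-attach writes; only the echoed values are visible in the trace, not their destinations. A hedged sketch of the likely sysfs sequence, with every path below an assumption rather than something shown in this log:

# Soft-remove each controller (sh@39-40: "echo 1" per device).
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done

# Once the bdevs are gone, rescan the bus (sh@56: "echo 1").
echo 1 > /sys/bus/pci/rescan

# Re-attach each device to uio_pci_generic (sh@58-62). The trace echoes the
# BDF twice and then an empty string; which attributes those writes hit is
# assumed here (driver_override, drivers_probe, then clearing the override).
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done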
00:13:54.123 [2024-07-15 17:20:04.730703] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.123 [2024-07-15 17:20:04.730761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.123 [2024-07-15 17:20:04.730793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.123 [2024-07-15 17:20:04.730822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.123 [2024-07-15 17:20:04.730852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.123 [2024-07-15 17:20:04.730870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.123 [2024-07-15 17:20:04.730892] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.123 [2024-07-15 17:20:04.730908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.123 [2024-07-15 17:20:04.730928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.123 [2024-07-15 17:20:04.730945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.123 [2024-07-15 17:20:04.730965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.123 [2024-07-15 17:20:04.730981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:54.123 17:20:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:54.381 [2024-07-15 17:20:05.128501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:54.381 [2024-07-15 17:20:05.130831] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.381 [2024-07-15 17:20:05.130905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.381 [2024-07-15 17:20:05.130933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.381 [2024-07-15 17:20:05.130966] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.381 [2024-07-15 17:20:05.130986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.381 [2024-07-15 17:20:05.131007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.381 [2024-07-15 17:20:05.131026] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.381 [2024-07-15 17:20:05.131050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.381 [2024-07-15 17:20:05.131067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.381 [2024-07-15 17:20:05.131092] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.381 [2024-07-15 17:20:05.131108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.381 [2024-07-15 17:20:05.131128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.638 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.638 17:20:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.638 17:20:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.638 17:20:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.639 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.896 17:20:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.02 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.02 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 00:14:07.160 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:07.160 17:20:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 86407 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 86407 ']' 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 86407 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86407 00:14:07.160 killing process with pid 86407 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86407' 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@967 -- # kill 86407 00:14:07.160 17:20:17 sw_hotplug -- common/autotest_common.sh@972 -- # wait 86407 00:14:07.432 17:20:18 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:07.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.254 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.254 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:08.512 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.512 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.512 00:14:08.512 real 2m29.654s 00:14:08.512 user 1m48.524s 00:14:08.512 sys 0m20.631s 00:14:08.512 
************************************ 00:14:08.512 END TEST sw_hotplug 00:14:08.512 ************************************ 00:14:08.512 17:20:19 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.512 17:20:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:08.512 17:20:19 -- common/autotest_common.sh@1142 -- # return 0 00:14:08.512 17:20:19 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:14:08.512 17:20:19 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:08.512 17:20:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:08.512 17:20:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.512 17:20:19 -- common/autotest_common.sh@10 -- # set +x 00:14:08.512 ************************************ 00:14:08.512 START TEST nvme_xnvme 00:14:08.512 ************************************ 00:14:08.512 17:20:19 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:08.512 * Looking for test storage... 00:14:08.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:08.770 17:20:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.770 17:20:19 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.770 17:20:19 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.770 17:20:19 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.770 17:20:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.770 17:20:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.770 17:20:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.770 17:20:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:08.770 17:20:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.770 17:20:19 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy 
malloc_to_xnvme_copy 00:14:08.770 17:20:19 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:08.770 17:20:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.770 17:20:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.770 ************************************ 00:14:08.770 START TEST xnvme_to_malloc_dd_copy 00:14:08.770 ************************************ 00:14:08.770 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:14:08.770 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:08.770 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:08.770 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:08.771 17:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:08.771 { 00:14:08.771 "subsystems": [ 00:14:08.771 { 00:14:08.771 "subsystem": "bdev", 00:14:08.771 "config": [ 00:14:08.771 { 00:14:08.771 "params": { 00:14:08.771 "block_size": 512, 00:14:08.771 "num_blocks": 2097152, 00:14:08.771 "name": "malloc0" 00:14:08.771 }, 
00:14:08.771 "method": "bdev_malloc_create" 00:14:08.771 }, 00:14:08.771 { 00:14:08.771 "params": { 00:14:08.771 "io_mechanism": "libaio", 00:14:08.771 "filename": "/dev/nullb0", 00:14:08.771 "name": "null0" 00:14:08.771 }, 00:14:08.771 "method": "bdev_xnvme_create" 00:14:08.771 }, 00:14:08.771 { 00:14:08.771 "method": "bdev_wait_for_examine" 00:14:08.771 } 00:14:08.771 ] 00:14:08.771 } 00:14:08.771 ] 00:14:08.771 } 00:14:08.771 [2024-07-15 17:20:19.508264] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:08.771 [2024-07-15 17:20:19.508719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87738 ] 00:14:09.029 [2024-07-15 17:20:19.664038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:09.029 [2024-07-15 17:20:19.687993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.029 [2024-07-15 17:20:19.797314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.007  Copying: 152/1024 [MB] (152 MBps) Copying: 308/1024 [MB] (156 MBps) Copying: 466/1024 [MB] (157 MBps) Copying: 621/1024 [MB] (154 MBps) Copying: 773/1024 [MB] (152 MBps) Copying: 924/1024 [MB] (151 MBps) Copying: 1024/1024 [MB] (average 154 MBps) 00:14:17.007 00:14:17.007 17:20:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:17.007 17:20:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:17.007 17:20:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:17.007 17:20:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:17.007 { 00:14:17.007 "subsystems": [ 00:14:17.007 { 00:14:17.007 "subsystem": "bdev", 00:14:17.007 "config": [ 00:14:17.007 { 00:14:17.007 "params": { 00:14:17.007 "block_size": 512, 00:14:17.007 "num_blocks": 2097152, 00:14:17.007 "name": "malloc0" 00:14:17.007 }, 00:14:17.007 "method": "bdev_malloc_create" 00:14:17.007 }, 00:14:17.007 { 00:14:17.007 "params": { 00:14:17.007 "io_mechanism": "libaio", 00:14:17.007 "filename": "/dev/nullb0", 00:14:17.007 "name": "null0" 00:14:17.007 }, 00:14:17.007 "method": "bdev_xnvme_create" 00:14:17.007 }, 00:14:17.007 { 00:14:17.007 "method": "bdev_wait_for_examine" 00:14:17.007 } 00:14:17.007 ] 00:14:17.007 } 00:14:17.007 ] 00:14:17.007 } 00:14:17.007 [2024-07-15 17:20:27.709296] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:17.007 [2024-07-15 17:20:27.709535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87831 ] 00:14:17.266 [2024-07-15 17:20:27.865972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:17.266 [2024-07-15 17:20:27.888252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.266 [2024-07-15 17:20:27.975284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.627  Copying: 158/1024 [MB] (158 MBps) Copying: 322/1024 [MB] (164 MBps) Copying: 489/1024 [MB] (166 MBps) Copying: 652/1024 [MB] (162 MBps) Copying: 814/1024 [MB] (162 MBps) Copying: 977/1024 [MB] (162 MBps) Copying: 1024/1024 [MB] (average 162 MBps) 00:14:24.627 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:24.627 17:20:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:24.627 { 00:14:24.627 "subsystems": [ 00:14:24.627 { 00:14:24.627 "subsystem": "bdev", 00:14:24.627 "config": [ 00:14:24.627 { 00:14:24.627 "params": { 00:14:24.627 "block_size": 512, 00:14:24.627 "num_blocks": 2097152, 00:14:24.627 "name": "malloc0" 00:14:24.627 }, 00:14:24.627 "method": "bdev_malloc_create" 00:14:24.627 }, 00:14:24.627 { 00:14:24.627 "params": { 00:14:24.627 "io_mechanism": "io_uring", 00:14:24.627 "filename": "/dev/nullb0", 00:14:24.627 "name": "null0" 00:14:24.627 }, 00:14:24.627 "method": "bdev_xnvme_create" 00:14:24.627 }, 00:14:24.627 { 00:14:24.627 "method": "bdev_wait_for_examine" 00:14:24.627 } 00:14:24.627 ] 00:14:24.627 } 00:14:24.627 ] 00:14:24.627 } 00:14:24.912 [2024-07-15 17:20:35.499795] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:24.912 [2024-07-15 17:20:35.500001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87924 ] 00:14:24.912 [2024-07-15 17:20:35.652943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
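The remaining copy passes are the same malloc0 <-> null0 flow; the only change traced at xnvme.sh lines 38-39 is the xnvme bdev's I/O mechanism, so the generated config differs in a single field:

# Second half of the xnvme_io loop; the first two passes used libaio.
method_bdev_xnvme_create_0["io_mechanism"]=io_uring
# gen_conf then emits "io_mechanism": "io_uring" in the bdev_xnvme_create params
# and spdk_dd repeats the malloc0 -> null0 and null0 -> malloc0 copies.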
00:14:24.912 [2024-07-15 17:20:35.673915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.169 [2024-07-15 17:20:35.775142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.339  Copying: 164/1024 [MB] (164 MBps) Copying: 331/1024 [MB] (167 MBps) Copying: 490/1024 [MB] (159 MBps) Copying: 644/1024 [MB] (154 MBps) Copying: 805/1024 [MB] (161 MBps) Copying: 967/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 161 MBps) 00:14:32.339 00:14:32.623 17:20:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:32.623 17:20:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:32.623 17:20:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:32.623 17:20:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:32.623 { 00:14:32.623 "subsystems": [ 00:14:32.623 { 00:14:32.623 "subsystem": "bdev", 00:14:32.623 "config": [ 00:14:32.623 { 00:14:32.623 "params": { 00:14:32.623 "block_size": 512, 00:14:32.623 "num_blocks": 2097152, 00:14:32.623 "name": "malloc0" 00:14:32.623 }, 00:14:32.623 "method": "bdev_malloc_create" 00:14:32.623 }, 00:14:32.623 { 00:14:32.623 "params": { 00:14:32.623 "io_mechanism": "io_uring", 00:14:32.623 "filename": "/dev/nullb0", 00:14:32.623 "name": "null0" 00:14:32.623 }, 00:14:32.623 "method": "bdev_xnvme_create" 00:14:32.623 }, 00:14:32.623 { 00:14:32.623 "method": "bdev_wait_for_examine" 00:14:32.623 } 00:14:32.623 ] 00:14:32.623 } 00:14:32.623 ] 00:14:32.623 } 00:14:32.623 [2024-07-15 17:20:43.292762] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:32.623 [2024-07-15 17:20:43.292929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88010 ] 00:14:32.623 [2024-07-15 17:20:43.437624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:32.623 [2024-07-15 17:20:43.457029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.881 [2024-07-15 17:20:43.558356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.249  Copying: 161/1024 [MB] (161 MBps) Copying: 320/1024 [MB] (159 MBps) Copying: 481/1024 [MB] (160 MBps) Copying: 642/1024 [MB] (161 MBps) Copying: 806/1024 [MB] (163 MBps) Copying: 967/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 161 MBps) 00:14:40.249 00:14:40.249 17:20:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:40.249 17:20:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:40.249 00:14:40.249 real 0m31.677s 00:14:40.249 user 0m25.344s 00:14:40.249 sys 0m5.784s 00:14:40.249 17:20:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.249 17:20:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:40.249 ************************************ 00:14:40.249 END TEST xnvme_to_malloc_dd_copy 00:14:40.249 ************************************ 00:14:40.507 17:20:51 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:40.507 17:20:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:40.507 17:20:51 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.507 17:20:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.507 17:20:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.507 ************************************ 00:14:40.507 START TEST xnvme_bdevperf 00:14:40.507 ************************************ 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:40.507 17:20:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.507 { 00:14:40.507 "subsystems": [ 00:14:40.507 { 00:14:40.507 "subsystem": "bdev", 00:14:40.507 "config": [ 00:14:40.507 { 00:14:40.507 "params": { 00:14:40.507 "io_mechanism": "libaio", 00:14:40.507 "filename": "/dev/nullb0", 00:14:40.507 "name": "null0" 00:14:40.507 }, 00:14:40.507 "method": "bdev_xnvme_create" 00:14:40.507 }, 00:14:40.507 { 00:14:40.507 "method": "bdev_wait_for_examine" 00:14:40.507 } 00:14:40.507 ] 00:14:40.507 } 00:14:40.507 ] 00:14:40.507 } 00:14:40.507 [2024-07-15 17:20:51.240808] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:40.507 [2024-07-15 17:20:51.241013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88126 ] 00:14:40.765 [2024-07-15 17:20:51.395137] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:40.765 [2024-07-15 17:20:51.410494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.765 [2024-07-15 17:20:51.505762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.022 Running I/O for 5 seconds... 00:14:46.283 00:14:46.283 Latency(us) 00:14:46.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.283 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:46.283 null0 : 5.00 115557.84 451.40 0.00 0.00 550.38 188.97 1176.67 00:14:46.283 =================================================================================================================== 00:14:46.283 Total : 115557.84 451.40 0.00 0.00 550.38 188.97 1176.67 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.283 17:20:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.283 { 00:14:46.283 "subsystems": [ 00:14:46.283 { 00:14:46.283 "subsystem": "bdev", 00:14:46.283 "config": [ 00:14:46.283 { 00:14:46.283 "params": { 00:14:46.283 "io_mechanism": "io_uring", 00:14:46.283 "filename": "/dev/nullb0", 00:14:46.283 "name": "null0" 00:14:46.283 }, 00:14:46.283 "method": "bdev_xnvme_create" 00:14:46.283 }, 00:14:46.283 { 00:14:46.283 "method": "bdev_wait_for_examine" 00:14:46.283 } 00:14:46.283 ] 00:14:46.283 } 00:14:46.283 ] 00:14:46.283 } 00:14:46.283 [2024-07-15 17:20:57.015092] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:14:46.283 [2024-07-15 17:20:57.015257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88204 ] 00:14:46.541 [2024-07-15 17:20:57.167939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:46.541 [2024-07-15 17:20:57.189694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.541 [2024-07-15 17:20:57.275369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.541 Running I/O for 5 seconds... 00:14:51.802 00:14:51.802 Latency(us) 00:14:51.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.802 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:51.802 null0 : 5.00 156354.87 610.76 0.00 0.00 406.07 256.93 875.05 00:14:51.802 =================================================================================================================== 00:14:51.802 Total : 156354.87 610.76 0.00 0.00 406.07 256.93 875.05 00:14:51.802 17:21:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:51.802 17:21:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:52.060 00:14:52.060 real 0m11.573s 00:14:52.060 user 0m8.494s 00:14:52.060 sys 0m2.841s 00:14:52.060 17:21:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.060 ************************************ 00:14:52.060 END TEST xnvme_bdevperf 00:14:52.060 ************************************ 00:14:52.060 17:21:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.060 17:21:02 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:52.060 00:14:52.060 real 0m43.444s 00:14:52.060 user 0m33.908s 00:14:52.060 sys 0m8.744s 00:14:52.060 17:21:02 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.060 17:21:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.060 ************************************ 00:14:52.060 END TEST nvme_xnvme 00:14:52.060 ************************************ 00:14:52.060 17:21:02 -- common/autotest_common.sh@1142 -- # return 0 00:14:52.060 17:21:02 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:52.060 17:21:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.060 17:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.060 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:14:52.060 ************************************ 00:14:52.060 START TEST blockdev_xnvme 00:14:52.060 ************************************ 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:52.060 * Looking for test storage... 
00:14:52.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=88334 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 88334 00:14:52.060 17:21:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 88334 ']' 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.060 17:21:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.332 [2024-07-15 17:21:02.999862] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:14:52.332 [2024-07-15 17:21:03.000058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88334 ] 00:14:52.332 [2024-07-15 17:21:03.152948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:52.332 [2024-07-15 17:21:03.169328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.588 [2024-07-15 17:21:03.262620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.152 17:21:03 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.152 17:21:03 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:14:53.152 17:21:03 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:14:53.152 17:21:03 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:14:53.152 17:21:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:53.152 17:21:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:53.152 17:21:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:53.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:53.666 Waiting for block devices as requested 00:14:53.666 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:53.924 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:53.924 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:53.924 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:59.186 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:59.186 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:59.186 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- 
bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:59.187 nvme0n1 00:14:59.187 nvme1n1 00:14:59.187 nvme2n1 00:14:59.187 nvme2n2 00:14:59.187 nvme2n3 00:14:59.187 nvme3n1 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:59.187 17:21:09 
blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.187 17:21:09 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:59.187 17:21:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c110de40-1eb8-4ee9-a8ac-de69a7f8a99b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c110de40-1eb8-4ee9-a8ac-de69a7f8a99b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c1e2954a-a663-4370-b426-93020f2ebe28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c1e2954a-a663-4370-b426-93020f2ebe28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "275e5626-084d-4279-ae17-6fabd5e4506b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "275e5626-084d-4279-ae17-6fabd5e4506b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "dbb87424-21a9-4ad4-8e2b-9cfe6515ba60"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' 
"num_blocks": 1048576,' ' "uuid": "dbb87424-21a9-4ad4-8e2b-9cfe6515ba60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "89238392-97da-4e1c-8b2a-b9327f0703f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89238392-97da-4e1c-8b2a-b9327f0703f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1462754d-e30b-477c-b44a-e827e35de469"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1462754d-e30b-477c-b44a-e827e35de469",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:59.187 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:59.187 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:59.187 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:59.187 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 88334 00:14:59.187 17:21:10 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 88334 ']' 00:14:59.187 17:21:10 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 88334 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88334 00:14:59.446 killing process with pid 88334 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88334' 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 88334 00:14:59.446 17:21:10 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 88334 00:14:59.704 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:59.704 17:21:10 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:59.704 17:21:10 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:14:59.704 17:21:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.704 17:21:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.704 ************************************ 00:14:59.704 START TEST bdev_hello_world 00:14:59.704 ************************************ 00:14:59.704 17:21:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:59.963 [2024-07-15 17:21:10.616918] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:59.963 [2024-07-15 17:21:10.617108] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88675 ] 00:14:59.963 [2024-07-15 17:21:10.769298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.963 [2024-07-15 17:21:10.789238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.222 [2024-07-15 17:21:10.888404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.480 [2024-07-15 17:21:11.092887] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:00.480 [2024-07-15 17:21:11.092954] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:00.480 [2024-07-15 17:21:11.092983] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:00.480 [2024-07-15 17:21:11.095466] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:00.480 [2024-07-15 17:21:11.095867] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:00.480 [2024-07-15 17:21:11.095900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:00.480 [2024-07-15 17:21:11.096065] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:00.480 00:15:00.480 [2024-07-15 17:21:11.096096] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:00.738 00:15:00.738 real 0m0.832s 00:15:00.738 user 0m0.476s 00:15:00.738 sys 0m0.246s 00:15:00.738 ************************************ 00:15:00.738 END TEST bdev_hello_world 00:15:00.738 ************************************ 00:15:00.738 17:21:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.738 17:21:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 17:21:11 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:00.738 17:21:11 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:15:00.738 17:21:11 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.738 17:21:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.738 17:21:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 ************************************ 00:15:00.738 START TEST bdev_bounds 00:15:00.738 ************************************ 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=88706 00:15:00.738 Process bdevio pid: 88706 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 88706' 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 88706 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 88706 ']' 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.738 17:21:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 [2024-07-15 17:21:11.496474] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:00.738 [2024-07-15 17:21:11.496689] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88706 ] 00:15:00.996 [2024-07-15 17:21:11.647919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:00.996 [2024-07-15 17:21:11.665930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.996 [2024-07-15 17:21:11.760724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.996 [2024-07-15 17:21:11.760752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.996 [2024-07-15 17:21:11.760752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.561 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.561 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:15:01.561 17:21:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:01.819 I/O targets: 00:15:01.819 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:01.819 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:01.819 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.819 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.819 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:01.819 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:01.819 00:15:01.819 00:15:01.819 CUnit - A unit testing framework for C - Version 2.1-3 00:15:01.819 http://cunit.sourceforge.net/ 00:15:01.819 00:15:01.819 00:15:01.819 Suite: bdevio tests on: nvme3n1 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 Suite: bdevio tests on: nvme2n3 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 Suite: bdevio tests on: nvme2n2 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 Suite: bdevio tests on: nvme2n1 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 
00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 Suite: bdevio tests on: nvme1n1 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 Suite: bdevio tests on: nvme0n1 00:15:01.819 Test: blockdev write read block ...passed 00:15:01.819 Test: blockdev write zeroes read block ...passed 00:15:01.819 Test: blockdev write zeroes read no split ...passed 00:15:01.819 Test: blockdev write zeroes read split ...passed 00:15:01.819 Test: blockdev write zeroes read split partial ...passed 00:15:01.819 Test: blockdev reset ...passed 00:15:01.819 Test: blockdev write read 8 blocks ...passed 00:15:01.819 Test: blockdev write read size > 128k ...passed 00:15:01.819 Test: blockdev write read invalid size ...passed 00:15:01.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:01.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:01.819 Test: blockdev write read max offset ...passed 00:15:01.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:01.819 Test: blockdev writev readv 8 blocks ...passed 00:15:01.819 Test: blockdev writev readv 30 x 1block ...passed 00:15:01.819 Test: blockdev writev readv block ...passed 00:15:01.819 Test: blockdev writev readv size > 128k ...passed 00:15:01.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:01.819 Test: blockdev comparev and writev ...passed 00:15:01.819 Test: blockdev nvme passthru rw ...passed 00:15:01.819 Test: blockdev nvme passthru vendor specific ...passed 00:15:01.819 Test: blockdev nvme admin passthru ...passed 00:15:01.819 Test: blockdev copy ...passed 00:15:01.819 00:15:01.819 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.819 suites 6 6 n/a 0 0 00:15:01.819 tests 138 138 138 0 0 00:15:01.819 asserts 780 780 780 0 n/a 00:15:01.819 00:15:01.819 Elapsed time = 0.284 seconds 00:15:01.819 0 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # 
killprocess 88706 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 88706 ']' 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 88706 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.820 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88706 00:15:02.077 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:02.077 killing process with pid 88706 00:15:02.077 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:02.077 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88706' 00:15:02.077 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 88706 00:15:02.077 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 88706 00:15:02.335 17:21:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:15:02.335 00:15:02.335 real 0m1.543s 00:15:02.335 user 0m3.669s 00:15:02.335 sys 0m0.365s 00:15:02.335 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.335 ************************************ 00:15:02.335 END TEST bdev_bounds 00:15:02.335 ************************************ 00:15:02.335 17:21:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:02.335 17:21:12 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:02.335 17:21:12 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:02.335 17:21:12 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:02.335 17:21:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.335 17:21:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:02.335 ************************************ 00:15:02.335 START TEST bdev_nbd 00:15:02.335 ************************************ 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' 
'/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=88762 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 88762 /var/tmp/spdk-nbd.sock 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 88762 ']' 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:02.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.335 17:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:02.335 [2024-07-15 17:21:13.076170] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:02.335 [2024-07-15 17:21:13.076509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.597 [2024-07-15 17:21:13.220206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:02.597 [2024-07-15 17:21:13.235102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.597 [2024-07-15 17:21:13.327140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.163 17:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.728 1+0 records in 00:15:03.728 1+0 records out 00:15:03.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477206 s, 8.6 MB/s 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.728 17:21:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:03.728 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.729 1+0 records in 00:15:03.729 1+0 records out 00:15:03.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881047 s, 4.6 MB/s 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.729 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.987 17:21:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.987 1+0 records in 00:15:03.987 1+0 records out 00:15:03.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486506 s, 8.4 MB/s 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.987 17:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.552 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.553 1+0 records in 00:15:04.553 1+0 records out 00:15:04.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832285 s, 4.9 MB/s 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.553 1+0 records in 00:15:04.553 1+0 records out 00:15:04.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940762 s, 4.4 MB/s 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:04.553 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:05.117 17:21:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.117 1+0 records in 00:15:05.117 1+0 records out 00:15:05.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946672 s, 4.3 MB/s 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:05.117 17:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd0", 00:15:05.375 "bdev_name": "nvme0n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd1", 00:15:05.375 "bdev_name": "nvme1n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd2", 00:15:05.375 "bdev_name": "nvme2n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd3", 00:15:05.375 "bdev_name": "nvme2n2" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd4", 00:15:05.375 "bdev_name": "nvme2n3" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd5", 00:15:05.375 "bdev_name": "nvme3n1" 00:15:05.375 } 00:15:05.375 ]' 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd0", 00:15:05.375 "bdev_name": "nvme0n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd1", 00:15:05.375 "bdev_name": "nvme1n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd2", 00:15:05.375 "bdev_name": "nvme2n1" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd3", 00:15:05.375 "bdev_name": "nvme2n2" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd4", 00:15:05.375 "bdev_name": "nvme2n3" 00:15:05.375 }, 00:15:05.375 { 00:15:05.375 "nbd_device": "/dev/nbd5", 00:15:05.375 "bdev_name": "nvme3n1" 00:15:05.375 } 00:15:05.375 ]' 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:05.375 17:21:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.375 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.633 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.890 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:06.147 17:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.413 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:06.685 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:06.685 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.686 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.943 17:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:07.201 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:07.201 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:07.201 17:21:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.459 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:07.717 /dev/nbd0 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.717 1+0 records in 00:15:07.717 1+0 records out 00:15:07.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367733 s, 11.1 MB/s 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.717 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:07.975 /dev/nbd1 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.975 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.975 1+0 records in 00:15:07.976 1+0 records out 00:15:07.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651429 s, 6.3 MB/s 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- 
# return 0 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.976 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:08.234 /dev/nbd10 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:08.234 17:21:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.234 1+0 records in 00:15:08.234 1+0 records out 00:15:08.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748944 s, 5.5 MB/s 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:08.234 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:08.492 /dev/nbd11 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:08.492 17:21:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.492 1+0 records in 00:15:08.492 1+0 records out 00:15:08.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537356 s, 7.6 MB/s 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:08.492 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:08.750 /dev/nbd12 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.750 1+0 records in 00:15:08.750 1+0 records out 00:15:08.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637025 s, 6.4 MB/s 00:15:08.750 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:08.751 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
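Note: the trace above repeats the same polling pattern for every NBD device: waitfornbd_exit loops until the name drops out of /proc/partitions after nbd_stop_disk, and waitfornbd (run right after each nbd_start_disk) waits for the entry to appear and then reads one 4 KiB block back through the device. A minimal bash sketch of those helpers as reconstructed from the traced commands follows; the retry bound of 20 and the grep/dd/stat checks come straight from the log, while the sleep between retries and the temp-file path are assumptions and the real nbd_common.sh/autotest_common.sh may differ.

  # Hedged sketch of the wait helpers exercised by the trace above.
  waitfornbd() {
      local nbd_name=$1 i size
      # first wait until the kernel lists the device at all
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              break
          fi
          sleep 0.1   # assumed back-off, not visible in the trace
      done
      # then prove the export serves I/O: read one 4 KiB block back directly
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || sleep 0.1
          size=$(stat -c %s /tmp/nbdtest 2>/dev/null || echo 0)
          rm -f /tmp/nbdtest
          if [ "$size" != 0 ]; then
              return 0       # non-empty read means the export is live
          fi
      done
      return 1
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      # after nbd_stop_disk, poll until the entry disappears from /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              break
          fi
          sleep 0.1   # assumed back-off, not visible in the trace
      done
      return 0
  }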
00:15:09.009 /dev/nbd13 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.009 1+0 records in 00:15:09.009 1+0 records out 00:15:09.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193286 s, 2.1 MB/s 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.009 17:21:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd0", 00:15:09.577 "bdev_name": "nvme0n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd1", 00:15:09.577 "bdev_name": "nvme1n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd10", 00:15:09.577 "bdev_name": "nvme2n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd11", 00:15:09.577 "bdev_name": "nvme2n2" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd12", 00:15:09.577 "bdev_name": "nvme2n3" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd13", 00:15:09.577 "bdev_name": "nvme3n1" 00:15:09.577 } 00:15:09.577 ]' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd0", 00:15:09.577 "bdev_name": "nvme0n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd1", 00:15:09.577 "bdev_name": "nvme1n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": 
"/dev/nbd10", 00:15:09.577 "bdev_name": "nvme2n1" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd11", 00:15:09.577 "bdev_name": "nvme2n2" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd12", 00:15:09.577 "bdev_name": "nvme2n3" 00:15:09.577 }, 00:15:09.577 { 00:15:09.577 "nbd_device": "/dev/nbd13", 00:15:09.577 "bdev_name": "nvme3n1" 00:15:09.577 } 00:15:09.577 ]' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:09.577 /dev/nbd1 00:15:09.577 /dev/nbd10 00:15:09.577 /dev/nbd11 00:15:09.577 /dev/nbd12 00:15:09.577 /dev/nbd13' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:09.577 /dev/nbd1 00:15:09.577 /dev/nbd10 00:15:09.577 /dev/nbd11 00:15:09.577 /dev/nbd12 00:15:09.577 /dev/nbd13' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:09.577 256+0 records in 00:15:09.577 256+0 records out 00:15:09.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105674 s, 99.2 MB/s 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:09.577 256+0 records in 00:15:09.577 256+0 records out 00:15:09.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159052 s, 6.6 MB/s 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.577 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:09.835 256+0 records in 00:15:09.835 256+0 records out 00:15:09.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151313 s, 6.9 MB/s 00:15:09.835 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.835 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:09.835 256+0 records in 00:15:09.835 256+0 records out 00:15:09.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143075 s, 7.3 MB/s 00:15:09.835 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:09.835 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:10.095 256+0 records in 00:15:10.095 256+0 records out 00:15:10.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150464 s, 7.0 MB/s 00:15:10.095 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:10.095 17:21:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:10.353 256+0 records in 00:15:10.353 256+0 records out 00:15:10.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166471 s, 6.3 MB/s 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:10.353 256+0 records in 00:15:10.353 256+0 records out 00:15:10.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151287 s, 6.9 MB/s 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:10.353 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.612 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.178 17:21:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.178 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.765 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.331 17:21:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.331 17:21:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:12.331 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:12.595 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:12.853 malloc_lvol_verify 00:15:12.853 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:12.853 592dbb0d-4cc9-4f41-ba90-12dd6edef848 00:15:13.111 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:13.111 150d307d-7879-4a82-8b28-a898e465d7e6 00:15:13.111 17:21:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:13.369 /dev/nbd0 00:15:13.369 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:13.626 mke2fs 1.46.5 (30-Dec-2021) 00:15:13.626 Discarding device blocks: 0/4096 done 00:15:13.626 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:13.626 00:15:13.626 Allocating group tables: 0/1 done 00:15:13.626 Writing inode tables: 0/1 done 00:15:13.626 Creating journal (1024 blocks): done 00:15:13.626 Writing superblocks and filesystem accounting information: 0/1 done 00:15:13.626 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.626 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 88762 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 88762 ']' 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 88762 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88762 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88762' 00:15:13.884 killing process with pid 88762 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 88762 00:15:13.884 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 88762 00:15:14.143 17:21:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:15:14.143 00:15:14.143 real 0m11.830s 00:15:14.143 user 0m16.983s 00:15:14.143 sys 0m4.207s 00:15:14.143 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.143 17:21:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 ************************************ 00:15:14.143 END TEST bdev_nbd 00:15:14.143 ************************************ 00:15:14.143 17:21:24 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:14.143 17:21:24 
blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:15:14.143 17:21:24 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:15:14.143 17:21:24 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:15:14.143 17:21:24 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:15:14.143 17:21:24 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:14.143 17:21:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.143 17:21:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 ************************************ 00:15:14.143 START TEST bdev_fio 00:15:14.143 ************************************ 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:14.143 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b 
in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 ************************************ 00:15:14.143 START TEST bdev_fio_rw_verify 00:15:14.143 ************************************ 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.143 17:21:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:14.402 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:14.402 fio-3.35 00:15:14.402 Starting 6 threads 00:15:26.663 00:15:26.663 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=89191: Mon Jul 15 17:21:35 2024 00:15:26.663 read: IOPS=27.0k, BW=105MiB/s (110MB/s)(1054MiB/10001msec) 00:15:26.663 slat (usec): min=3, max=2657, avg= 7.27, stdev= 8.41 00:15:26.663 clat (usec): min=122, max=9780, avg=685.57, stdev=285.52 00:15:26.663 lat (usec): min=126, max=9790, avg=692.84, stdev=286.31 
00:15:26.663 clat percentiles (usec): 00:15:26.663 | 50.000th=[ 685], 99.000th=[ 1434], 99.900th=[ 2507], 99.990th=[ 4686], 00:15:26.663 | 99.999th=[ 9765] 00:15:26.663 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(1068MiB/10001msec); 0 zone resets 00:15:26.663 slat (usec): min=13, max=3422, avg=28.30, stdev=33.97 00:15:26.663 clat (usec): min=105, max=4911, avg=791.77, stdev=292.18 00:15:26.663 lat (usec): min=120, max=4971, avg=820.07, stdev=295.85 00:15:26.663 clat percentiles (usec): 00:15:26.663 | 50.000th=[ 783], 99.000th=[ 1631], 99.900th=[ 2343], 99.990th=[ 3818], 00:15:26.663 | 99.999th=[ 4817] 00:15:26.663 bw ( KiB/s): min=86100, max=137024, per=100.00%, avg=109504.32, stdev=2398.56, samples=114 00:15:26.663 iops : min=21523, max=34256, avg=27375.63, stdev=599.69, samples=114 00:15:26.663 lat (usec) : 250=2.76%, 500=17.59%, 750=32.61%, 1000=31.86% 00:15:26.663 lat (msec) : 2=14.96%, 4=0.20%, 10=0.02% 00:15:26.663 cpu : usr=57.30%, sys=27.88%, ctx=7837, majf=0, minf=23380 00:15:26.663 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.664 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.664 issued rwts: total=269731,273368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.664 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:26.664 00:15:26.664 Run status group 0 (all jobs): 00:15:26.664 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1054MiB (1105MB), run=10001-10001msec 00:15:26.664 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1068MiB (1120MB), run=10001-10001msec 00:15:26.664 ----------------------------------------------------- 00:15:26.664 Suppressions used: 00:15:26.664 count bytes template 00:15:26.664 6 48 /usr/src/fio/parse.c 00:15:26.664 3446 330816 /usr/src/fio/iolog.c 00:15:26.664 1 8 libtcmalloc_minimal.so 00:15:26.664 1 904 libcrypto.so 00:15:26.664 ----------------------------------------------------- 00:15:26.664 00:15:26.664 00:15:26.664 real 0m11.305s 00:15:26.664 user 0m35.200s 00:15:26.664 sys 0m17.089s 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:26.664 ************************************ 00:15:26.664 END TEST bdev_fio_rw_verify 00:15:26.664 ************************************ 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c110de40-1eb8-4ee9-a8ac-de69a7f8a99b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c110de40-1eb8-4ee9-a8ac-de69a7f8a99b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c1e2954a-a663-4370-b426-93020f2ebe28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c1e2954a-a663-4370-b426-93020f2ebe28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "275e5626-084d-4279-ae17-6fabd5e4506b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "275e5626-084d-4279-ae17-6fabd5e4506b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "dbb87424-21a9-4ad4-8e2b-9cfe6515ba60"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbb87424-21a9-4ad4-8e2b-9cfe6515ba60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "89238392-97da-4e1c-8b2a-b9327f0703f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89238392-97da-4e1c-8b2a-b9327f0703f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1462754d-e30b-477c-b44a-e827e35de469"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1462754d-e30b-477c-b44a-e827e35de469",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:26.664 /home/vagrant/spdk_repo/spdk 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 
00:15:26.664 00:15:26.664 real 0m11.493s 00:15:26.664 user 0m35.298s 00:15:26.664 sys 0m17.176s 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.664 17:21:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:26.664 ************************************ 00:15:26.664 END TEST bdev_fio 00:15:26.664 ************************************ 00:15:26.664 17:21:36 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:26.664 17:21:36 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:26.664 17:21:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:26.664 17:21:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:26.664 17:21:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.664 17:21:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.664 ************************************ 00:15:26.664 START TEST bdev_verify 00:15:26.664 ************************************ 00:15:26.664 17:21:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:26.664 [2024-07-15 17:21:36.513529] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:26.664 [2024-07-15 17:21:36.513759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89365 ] 00:15:26.664 [2024-07-15 17:21:36.671333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:26.664 [2024-07-15 17:21:36.686951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:26.664 [2024-07-15 17:21:36.806078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.664 [2024-07-15 17:21:36.806121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.664 Running I/O for 5 seconds... 
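Note: the bdev_verify pass launched just above reduces to a single bdevperf invocation against the generated JSON config; the per-device rows in the results that follow appear twice because -m 0x3 starts a reactor on each of cores 0 and 1. A sketch of running the same check by hand, with the flags copied from the trace (-C is passed through exactly as traced):

  # verify workload from the trace: queue depth 128, 4 KiB I/Os,
  # 5 second run, jobs spread over cores 0-1 via -m 0x3
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3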
00:15:31.931 00:15:31.931 Latency(us) 00:15:31.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.931 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0xa0000 00:15:31.931 nvme0n1 : 5.02 1606.35 6.27 0.00 0.00 79542.83 17039.36 70540.57 00:15:31.931 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0xa0000 length 0xa0000 00:15:31.931 nvme0n1 : 5.03 1551.86 6.06 0.00 0.00 82329.30 16205.27 76736.70 00:15:31.931 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0xbd0bd 00:15:31.931 nvme1n1 : 5.06 3149.50 12.30 0.00 0.00 40438.48 5451.40 56956.74 00:15:31.931 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:31.931 nvme1n1 : 5.06 2951.26 11.53 0.00 0.00 43067.71 5928.03 71017.19 00:15:31.931 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0x80000 00:15:31.931 nvme2n1 : 5.06 1617.62 6.32 0.00 0.00 78602.89 6642.97 62437.93 00:15:31.931 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x80000 length 0x80000 00:15:31.931 nvme2n1 : 5.04 1550.05 6.05 0.00 0.00 82004.05 19422.49 66727.56 00:15:31.931 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0x80000 00:15:31.931 nvme2n2 : 5.06 1618.27 6.32 0.00 0.00 78436.51 6851.49 75306.82 00:15:31.931 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x80000 length 0x80000 00:15:31.931 nvme2n2 : 5.07 1564.41 6.11 0.00 0.00 81104.48 5093.93 67680.81 00:15:31.931 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0x80000 00:15:31.931 nvme2n3 : 5.07 1616.74 6.32 0.00 0.00 78374.44 8221.79 67680.81 00:15:31.931 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x80000 length 0x80000 00:15:31.931 nvme2n3 : 5.07 1563.86 6.11 0.00 0.00 80998.38 5719.51 73400.32 00:15:31.931 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x0 length 0x20000 00:15:31.931 nvme3n1 : 5.07 1616.22 6.31 0.00 0.00 78254.35 8936.73 72923.69 00:15:31.931 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.931 Verification LBA range: start 0x20000 length 0x20000 00:15:31.931 nvme3n1 : 5.08 1563.26 6.11 0.00 0.00 80875.92 6583.39 79596.45 00:15:31.931 =================================================================================================================== 00:15:31.931 Total : 21969.39 85.82 0.00 0.00 69378.32 5093.93 79596.45 00:15:31.931 00:15:31.931 real 0m6.136s 00:15:31.931 user 0m9.221s 00:15:31.931 sys 0m1.985s 00:15:31.931 17:21:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.931 17:21:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:31.931 ************************************ 00:15:31.931 END TEST bdev_verify 00:15:31.931 ************************************ 00:15:31.931 17:21:42 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:15:31.931 17:21:42 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:31.931 17:21:42 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:31.931 17:21:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.931 17:21:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.931 ************************************ 00:15:31.931 START TEST bdev_verify_big_io 00:15:31.931 ************************************ 00:15:31.931 17:21:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:31.931 [2024-07-15 17:21:42.703901] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:31.931 [2024-07-15 17:21:42.704128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89460 ] 00:15:32.189 [2024-07-15 17:21:42.861065] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:32.189 [2024-07-15 17:21:42.879739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:32.189 [2024-07-15 17:21:43.013577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.189 [2024-07-15 17:21:43.013629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.754 Running I/O for 5 seconds... 
00:15:39.326 00:15:39.326 Latency(us) 00:15:39.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.326 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0xa000 00:15:39.326 nvme0n1 : 5.97 88.37 5.52 0.00 0.00 1412729.58 174444.92 1952257.86 00:15:39.326 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0xa000 length 0xa000 00:15:39.326 nvme0n1 : 5.98 101.72 6.36 0.00 0.00 1047441.56 9115.46 2669102.55 00:15:39.326 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0xbd0b 00:15:39.326 nvme1n1 : 5.95 161.46 10.09 0.00 0.00 754265.06 10724.07 876990.84 00:15:39.326 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:39.326 nvme1n1 : 5.91 167.78 10.49 0.00 0.00 737932.00 55050.24 991380.95 00:15:39.326 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0x8000 00:15:39.326 nvme2n1 : 5.96 136.80 8.55 0.00 0.00 863055.38 103904.35 949437.91 00:15:39.326 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x8000 length 0x8000 00:15:39.326 nvme2n1 : 5.92 159.58 9.97 0.00 0.00 752156.50 57433.37 983754.94 00:15:39.326 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0x8000 00:15:39.326 nvme2n2 : 5.96 134.34 8.40 0.00 0.00 850386.20 98184.84 968502.92 00:15:39.326 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x8000 length 0x8000 00:15:39.326 nvme2n2 : 5.96 104.70 6.54 0.00 0.00 1106683.06 169678.66 2059021.96 00:15:39.326 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0x8000 00:15:39.326 nvme2n3 : 5.96 107.43 6.71 0.00 0.00 1028546.19 81979.58 1212535.16 00:15:39.326 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x8000 length 0x8000 00:15:39.326 nvme2n3 : 5.97 125.94 7.87 0.00 0.00 901526.59 49330.73 1715851.64 00:15:39.326 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x0 length 0x2000 00:15:39.326 nvme3n1 : 5.97 171.57 10.72 0.00 0.00 627849.89 8757.99 850299.81 00:15:39.326 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:39.326 Verification LBA range: start 0x2000 length 0x2000 00:15:39.326 nvme3n1 : 5.97 115.16 7.20 0.00 0.00 952017.95 19303.33 2211542.11 00:15:39.326 =================================================================================================================== 00:15:39.326 Total : 1574.84 98.43 0.00 0.00 880778.69 8757.99 2669102.55 00:15:39.326 00:15:39.326 real 0m7.044s 00:15:39.326 user 0m12.548s 00:15:39.326 sys 0m0.686s 00:15:39.327 17:21:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.327 17:21:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.327 ************************************ 00:15:39.327 END TEST bdev_verify_big_io 00:15:39.327 ************************************ 00:15:39.327 
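Both verify passes above drive the same bdevperf binary against the generated bdev.json; only the I/O size differs (-o 4096 for the first pass, -o 65536 here), which is why the big-I/O pass completes far fewer but much larger operations. A sketch of running the big-I/O pass by hand, using the paths from this workspace (bdev.json is produced earlier by the test and removed again during cleanup), would be:
$ cd /home/vagrant/spdk_repo/spdk
$ ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3
# -q 128    queue depth per job        -o 65536  64 KiB I/Os
# -w verify write-then-read-back check -t 5      run for 5 seconds
# -m 0x3    cores 0 and 1; -C is passed through unchanged from blockdev.sh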
17:21:49 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:39.327 17:21:49 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.327 17:21:49 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:39.327 17:21:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.327 17:21:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.327 ************************************ 00:15:39.327 START TEST bdev_write_zeroes 00:15:39.327 ************************************ 00:15:39.327 17:21:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.327 [2024-07-15 17:21:49.792576] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:39.327 [2024-07-15 17:21:49.792779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89577 ] 00:15:39.327 [2024-07-15 17:21:49.945857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:39.327 [2024-07-15 17:21:49.968434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.327 [2024-07-15 17:21:50.063543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.585 Running I/O for 1 seconds... 
00:15:40.517 00:15:40.517 Latency(us) 00:15:40.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.517 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme0n1 : 1.02 11297.95 44.13 0.00 0.00 11316.00 7149.38 17873.45 00:15:40.517 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme1n1 : 1.02 16707.50 65.26 0.00 0.00 7644.15 4468.36 14954.12 00:15:40.517 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme2n1 : 1.02 11280.72 44.07 0.00 0.00 11259.72 7089.80 20137.43 00:15:40.517 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme2n2 : 1.02 11263.83 44.00 0.00 0.00 11266.69 7089.80 20137.43 00:15:40.517 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme2n3 : 1.02 11247.21 43.93 0.00 0.00 11272.54 7149.38 19899.11 00:15:40.517 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:40.517 nvme3n1 : 1.03 11230.71 43.87 0.00 0.00 11278.82 7149.38 19779.96 00:15:40.517 =================================================================================================================== 00:15:40.517 Total : 73027.92 285.27 0.00 0.00 10450.79 4468.36 20137.43 00:15:40.775 00:15:40.775 real 0m1.918s 00:15:40.775 user 0m1.132s 00:15:40.775 sys 0m0.612s 00:15:40.775 17:21:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.775 17:21:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:40.775 ************************************ 00:15:40.775 END TEST bdev_write_zeroes 00:15:40.775 ************************************ 00:15:41.033 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:41.033 17:21:51 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.033 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:41.033 17:21:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.033 17:21:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.033 ************************************ 00:15:41.033 START TEST bdev_json_nonenclosed 00:15:41.033 ************************************ 00:15:41.033 17:21:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.033 [2024-07-15 17:21:51.757489] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:41.033 [2024-07-15 17:21:51.757640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89614 ] 00:15:41.291 [2024-07-15 17:21:51.901601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:41.291 [2024-07-15 17:21:51.922001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.291 [2024-07-15 17:21:52.004789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.291 [2024-07-15 17:21:52.004911] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:41.291 [2024-07-15 17:21:52.004946] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:41.291 [2024-07-15 17:21:52.004964] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:41.291 00:15:41.292 real 0m0.457s 00:15:41.292 user 0m0.211s 00:15:41.292 sys 0m0.142s 00:15:41.292 17:21:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:15:41.292 17:21:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.292 17:21:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:41.292 ************************************ 00:15:41.292 END TEST bdev_json_nonenclosed 00:15:41.292 ************************************ 00:15:41.549 17:21:52 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:41.549 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:15:41.549 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.549 17:21:52 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:41.549 17:21:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.550 17:21:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.550 ************************************ 00:15:41.550 START TEST bdev_json_nonarray 00:15:41.550 ************************************ 00:15:41.550 17:21:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.550 [2024-07-15 17:21:52.271876] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:41.550 [2024-07-15 17:21:52.272088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89640 ] 00:15:41.808 [2024-07-15 17:21:52.425530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:41.808 [2024-07-15 17:21:52.447548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.808 [2024-07-15 17:21:52.548449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.808 [2024-07-15 17:21:52.548616] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:41.808 [2024-07-15 17:21:52.548678] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:41.808 [2024-07-15 17:21:52.548700] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:42.066 00:15:42.066 real 0m0.511s 00:15:42.066 user 0m0.237s 00:15:42.066 sys 0m0.169s 00:15:42.066 17:21:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:15:42.066 17:21:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.066 17:21:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:42.066 ************************************ 00:15:42.066 END TEST bdev_json_nonarray 00:15:42.066 ************************************ 00:15:42.066 17:21:52 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:42.066 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.067 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:42.067 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:42.067 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:42.067 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:42.067 17:21:52 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:42.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.733 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.733 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.733 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.733 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:50.733 00:15:50.733 real 0m58.152s 00:15:50.733 user 1m28.659s 00:15:50.733 sys 0m40.517s 00:15:50.733 17:22:00 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.734 17:22:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.734 ************************************ 00:15:50.734 END TEST blockdev_xnvme 00:15:50.734 ************************************ 00:15:50.734 17:22:00 -- common/autotest_common.sh@1142 -- # return 0 00:15:50.734 17:22:00 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:50.734 17:22:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.734 17:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.734 17:22:00 -- common/autotest_common.sh@10 -- # set +x 00:15:50.734 ************************************ 00:15:50.734 START TEST ublk 00:15:50.734 ************************************ 00:15:50.734 17:22:00 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:50.734 * Looking for test storage... 
00:15:50.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:50.734 17:22:01 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:50.734 17:22:01 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:50.734 17:22:01 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:50.734 17:22:01 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:50.734 17:22:01 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:50.734 17:22:01 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:50.734 17:22:01 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:50.734 17:22:01 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:50.734 17:22:01 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:50.734 17:22:01 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.734 17:22:01 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.734 17:22:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.734 ************************************ 00:15:50.734 START TEST test_save_ublk_config 00:15:50.734 ************************************ 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=89934 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 89934 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 89934 ']' 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.734 17:22:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.734 [2024-07-15 17:22:01.203435] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:15:50.734 [2024-07-15 17:22:01.203623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89934 ] 00:15:50.734 [2024-07-15 17:22:01.356327] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:50.734 [2024-07-15 17:22:01.379893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.734 [2024-07-15 17:22:01.490460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.299 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:51.299 [2024-07-15 17:22:02.122387] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:51.299 [2024-07-15 17:22:02.122824] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:51.299 malloc0 00:15:51.558 [2024-07-15 17:22:02.162547] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:51.558 [2024-07-15 17:22:02.162673] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:51.558 [2024-07-15 17:22:02.162698] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:51.558 [2024-07-15 17:22:02.162718] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:51.558 [2024-07-15 17:22:02.171488] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:51.558 [2024-07-15 17:22:02.171534] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:51.558 [2024-07-15 17:22:02.178393] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:51.558 [2024-07-15 17:22:02.178546] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:51.558 [2024-07-15 17:22:02.195391] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:51.558 0 00:15:51.558 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.558 17:22:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:51.558 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.558 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:51.815 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.815 17:22:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:51.815 "subsystems": [ 00:15:51.815 { 00:15:51.815 "subsystem": "keyring", 00:15:51.815 "config": [] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "iobuf", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "iobuf_set_options", 00:15:51.815 "params": { 00:15:51.815 "small_pool_count": 8192, 
00:15:51.815 "large_pool_count": 1024, 00:15:51.815 "small_bufsize": 8192, 00:15:51.815 "large_bufsize": 135168 00:15:51.815 } 00:15:51.815 } 00:15:51.815 ] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "sock", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "sock_set_default_impl", 00:15:51.815 "params": { 00:15:51.815 "impl_name": "posix" 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "sock_impl_set_options", 00:15:51.815 "params": { 00:15:51.815 "impl_name": "ssl", 00:15:51.815 "recv_buf_size": 4096, 00:15:51.815 "send_buf_size": 4096, 00:15:51.815 "enable_recv_pipe": true, 00:15:51.815 "enable_quickack": false, 00:15:51.815 "enable_placement_id": 0, 00:15:51.815 "enable_zerocopy_send_server": true, 00:15:51.815 "enable_zerocopy_send_client": false, 00:15:51.815 "zerocopy_threshold": 0, 00:15:51.815 "tls_version": 0, 00:15:51.815 "enable_ktls": false 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "sock_impl_set_options", 00:15:51.815 "params": { 00:15:51.815 "impl_name": "posix", 00:15:51.815 "recv_buf_size": 2097152, 00:15:51.815 "send_buf_size": 2097152, 00:15:51.815 "enable_recv_pipe": true, 00:15:51.815 "enable_quickack": false, 00:15:51.815 "enable_placement_id": 0, 00:15:51.815 "enable_zerocopy_send_server": true, 00:15:51.815 "enable_zerocopy_send_client": false, 00:15:51.815 "zerocopy_threshold": 0, 00:15:51.815 "tls_version": 0, 00:15:51.815 "enable_ktls": false 00:15:51.815 } 00:15:51.815 } 00:15:51.815 ] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "vmd", 00:15:51.815 "config": [] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "accel", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "accel_set_options", 00:15:51.815 "params": { 00:15:51.815 "small_cache_size": 128, 00:15:51.815 "large_cache_size": 16, 00:15:51.815 "task_count": 2048, 00:15:51.815 "sequence_count": 2048, 00:15:51.815 "buf_count": 2048 00:15:51.815 } 00:15:51.815 } 00:15:51.815 ] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "bdev", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "bdev_set_options", 00:15:51.815 "params": { 00:15:51.815 "bdev_io_pool_size": 65535, 00:15:51.815 "bdev_io_cache_size": 256, 00:15:51.815 "bdev_auto_examine": true, 00:15:51.815 "iobuf_small_cache_size": 128, 00:15:51.815 "iobuf_large_cache_size": 16 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_raid_set_options", 00:15:51.815 "params": { 00:15:51.815 "process_window_size_kb": 1024 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_iscsi_set_options", 00:15:51.815 "params": { 00:15:51.815 "timeout_sec": 30 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_nvme_set_options", 00:15:51.815 "params": { 00:15:51.815 "action_on_timeout": "none", 00:15:51.815 "timeout_us": 0, 00:15:51.815 "timeout_admin_us": 0, 00:15:51.815 "keep_alive_timeout_ms": 10000, 00:15:51.815 "arbitration_burst": 0, 00:15:51.815 "low_priority_weight": 0, 00:15:51.815 "medium_priority_weight": 0, 00:15:51.815 "high_priority_weight": 0, 00:15:51.815 "nvme_adminq_poll_period_us": 10000, 00:15:51.815 "nvme_ioq_poll_period_us": 0, 00:15:51.815 "io_queue_requests": 0, 00:15:51.815 "delay_cmd_submit": true, 00:15:51.815 "transport_retry_count": 4, 00:15:51.815 "bdev_retry_count": 3, 00:15:51.815 "transport_ack_timeout": 0, 00:15:51.815 "ctrlr_loss_timeout_sec": 0, 00:15:51.815 "reconnect_delay_sec": 0, 00:15:51.815 "fast_io_fail_timeout_sec": 0, 00:15:51.815 
"disable_auto_failback": false, 00:15:51.815 "generate_uuids": false, 00:15:51.815 "transport_tos": 0, 00:15:51.815 "nvme_error_stat": false, 00:15:51.815 "rdma_srq_size": 0, 00:15:51.815 "io_path_stat": false, 00:15:51.815 "allow_accel_sequence": false, 00:15:51.815 "rdma_max_cq_size": 0, 00:15:51.815 "rdma_cm_event_timeout_ms": 0, 00:15:51.815 "dhchap_digests": [ 00:15:51.815 "sha256", 00:15:51.815 "sha384", 00:15:51.815 "sha512" 00:15:51.815 ], 00:15:51.815 "dhchap_dhgroups": [ 00:15:51.815 "null", 00:15:51.815 "ffdhe2048", 00:15:51.815 "ffdhe3072", 00:15:51.815 "ffdhe4096", 00:15:51.815 "ffdhe6144", 00:15:51.815 "ffdhe8192" 00:15:51.815 ] 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_nvme_set_hotplug", 00:15:51.815 "params": { 00:15:51.815 "period_us": 100000, 00:15:51.815 "enable": false 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_malloc_create", 00:15:51.815 "params": { 00:15:51.815 "name": "malloc0", 00:15:51.815 "num_blocks": 8192, 00:15:51.815 "block_size": 4096, 00:15:51.815 "physical_block_size": 4096, 00:15:51.815 "uuid": "b3810c07-ddb6-499a-8ab0-54ff0336a3c7", 00:15:51.815 "optimal_io_boundary": 0 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "bdev_wait_for_examine" 00:15:51.815 } 00:15:51.815 ] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "scsi", 00:15:51.815 "config": null 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "scheduler", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "framework_set_scheduler", 00:15:51.815 "params": { 00:15:51.815 "name": "static" 00:15:51.815 } 00:15:51.815 } 00:15:51.815 ] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "vhost_scsi", 00:15:51.815 "config": [] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "vhost_blk", 00:15:51.815 "config": [] 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "subsystem": "ublk", 00:15:51.815 "config": [ 00:15:51.815 { 00:15:51.815 "method": "ublk_create_target", 00:15:51.815 "params": { 00:15:51.815 "cpumask": "1" 00:15:51.815 } 00:15:51.815 }, 00:15:51.815 { 00:15:51.815 "method": "ublk_start_disk", 00:15:51.815 "params": { 00:15:51.815 "bdev_name": "malloc0", 00:15:51.815 "ublk_id": 0, 00:15:51.815 "num_queues": 1, 00:15:51.816 "queue_depth": 128 00:15:51.816 } 00:15:51.816 } 00:15:51.816 ] 00:15:51.816 }, 00:15:51.816 { 00:15:51.816 "subsystem": "nbd", 00:15:51.816 "config": [] 00:15:51.816 }, 00:15:51.816 { 00:15:51.816 "subsystem": "nvmf", 00:15:51.816 "config": [ 00:15:51.816 { 00:15:51.816 "method": "nvmf_set_config", 00:15:51.816 "params": { 00:15:51.816 "discovery_filter": "match_any", 00:15:51.816 "admin_cmd_passthru": { 00:15:51.816 "identify_ctrlr": false 00:15:51.816 } 00:15:51.816 } 00:15:51.816 }, 00:15:51.816 { 00:15:51.816 "method": "nvmf_set_max_subsystems", 00:15:51.816 "params": { 00:15:51.816 "max_subsystems": 1024 00:15:51.816 } 00:15:51.816 }, 00:15:51.816 { 00:15:51.816 "method": "nvmf_set_crdt", 00:15:51.816 "params": { 00:15:51.816 "crdt1": 0, 00:15:51.816 "crdt2": 0, 00:15:51.816 "crdt3": 0 00:15:51.816 } 00:15:51.816 } 00:15:51.816 ] 00:15:51.816 }, 00:15:51.816 { 00:15:51.816 "subsystem": "iscsi", 00:15:51.816 "config": [ 00:15:51.816 { 00:15:51.816 "method": "iscsi_set_options", 00:15:51.816 "params": { 00:15:51.816 "node_base": "iqn.2016-06.io.spdk", 00:15:51.816 "max_sessions": 128, 00:15:51.816 "max_connections_per_session": 2, 00:15:51.816 "max_queue_depth": 64, 00:15:51.816 "default_time2wait": 2, 00:15:51.816 "default_time2retain": 20, 
00:15:51.816 "first_burst_length": 8192, 00:15:51.816 "immediate_data": true, 00:15:51.816 "allow_duplicated_isid": false, 00:15:51.816 "error_recovery_level": 0, 00:15:51.816 "nop_timeout": 60, 00:15:51.816 "nop_in_interval": 30, 00:15:51.816 "disable_chap": false, 00:15:51.816 "require_chap": false, 00:15:51.816 "mutual_chap": false, 00:15:51.816 "chap_group": 0, 00:15:51.816 "max_large_datain_per_connection": 64, 00:15:51.816 "max_r2t_per_connection": 4, 00:15:51.816 "pdu_pool_size": 36864, 00:15:51.816 "immediate_data_pool_size": 16384, 00:15:51.816 "data_out_pool_size": 2048 00:15:51.816 } 00:15:51.816 } 00:15:51.816 ] 00:15:51.816 } 00:15:51.816 ] 00:15:51.816 }' 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 89934 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 89934 ']' 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 89934 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89934 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89934' 00:15:51.816 killing process with pid 89934 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 89934 00:15:51.816 17:22:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 89934 00:15:52.074 [2024-07-15 17:22:02.798934] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:52.074 [2024-07-15 17:22:02.845402] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:52.074 [2024-07-15 17:22:02.845621] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:52.074 [2024-07-15 17:22:02.853418] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:52.074 [2024-07-15 17:22:02.853491] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:52.074 [2024-07-15 17:22:02.853506] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:52.074 [2024-07-15 17:22:02.853539] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.074 [2024-07-15 17:22:02.853725] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=89971 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 89971 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 89971 ']' 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:52.332 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.332 17:22:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:52.332 "subsystems": [ 00:15:52.332 { 00:15:52.332 "subsystem": "keyring", 00:15:52.332 "config": [] 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "subsystem": "iobuf", 00:15:52.332 "config": [ 00:15:52.332 { 00:15:52.332 "method": "iobuf_set_options", 00:15:52.332 "params": { 00:15:52.332 "small_pool_count": 8192, 00:15:52.332 "large_pool_count": 1024, 00:15:52.332 "small_bufsize": 8192, 00:15:52.332 "large_bufsize": 135168 00:15:52.332 } 00:15:52.332 } 00:15:52.332 ] 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "subsystem": "sock", 00:15:52.332 "config": [ 00:15:52.332 { 00:15:52.332 "method": "sock_set_default_impl", 00:15:52.332 "params": { 00:15:52.332 "impl_name": "posix" 00:15:52.332 } 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "method": "sock_impl_set_options", 00:15:52.332 "params": { 00:15:52.332 "impl_name": "ssl", 00:15:52.332 "recv_buf_size": 4096, 00:15:52.332 "send_buf_size": 4096, 00:15:52.332 "enable_recv_pipe": true, 00:15:52.332 "enable_quickack": false, 00:15:52.332 "enable_placement_id": 0, 00:15:52.332 "enable_zerocopy_send_server": true, 00:15:52.332 "enable_zerocopy_send_client": false, 00:15:52.332 "zerocopy_threshold": 0, 00:15:52.332 "tls_version": 0, 00:15:52.332 "enable_ktls": false 00:15:52.332 } 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "method": "sock_impl_set_options", 00:15:52.332 "params": { 00:15:52.332 "impl_name": "posix", 00:15:52.332 "recv_buf_size": 2097152, 00:15:52.332 "send_buf_size": 2097152, 00:15:52.332 "enable_recv_pipe": true, 00:15:52.332 "enable_quickack": false, 00:15:52.332 "enable_placement_id": 0, 00:15:52.332 "enable_zerocopy_send_server": true, 00:15:52.332 "enable_zerocopy_send_client": false, 00:15:52.332 "zerocopy_threshold": 0, 00:15:52.332 "tls_version": 0, 00:15:52.332 "enable_ktls": false 00:15:52.332 } 00:15:52.332 } 00:15:52.332 ] 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "subsystem": "vmd", 00:15:52.332 "config": [] 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "subsystem": "accel", 00:15:52.332 "config": [ 00:15:52.332 { 00:15:52.332 "method": "accel_set_options", 00:15:52.332 "params": { 00:15:52.332 "small_cache_size": 128, 00:15:52.332 "large_cache_size": 16, 00:15:52.332 "task_count": 2048, 00:15:52.332 "sequence_count": 2048, 00:15:52.332 "buf_count": 2048 00:15:52.332 } 00:15:52.332 } 00:15:52.332 ] 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "subsystem": "bdev", 00:15:52.332 "config": [ 00:15:52.332 { 00:15:52.332 "method": "bdev_set_options", 00:15:52.332 "params": { 00:15:52.332 "bdev_io_pool_size": 65535, 00:15:52.332 "bdev_io_cache_size": 256, 00:15:52.332 "bdev_auto_examine": true, 00:15:52.332 "iobuf_small_cache_size": 128, 00:15:52.332 "iobuf_large_cache_size": 16 00:15:52.332 } 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "method": "bdev_raid_set_options", 00:15:52.332 "params": { 00:15:52.332 "process_window_size_kb": 1024 00:15:52.332 } 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "method": "bdev_iscsi_set_options", 00:15:52.332 "params": { 00:15:52.332 "timeout_sec": 30 00:15:52.332 } 00:15:52.332 }, 00:15:52.332 { 00:15:52.332 "method": "bdev_nvme_set_options", 00:15:52.332 "params": { 00:15:52.332 "action_on_timeout": "none", 00:15:52.332 "timeout_us": 0, 00:15:52.332 "timeout_admin_us": 0, 00:15:52.332 "keep_alive_timeout_ms": 10000, 
00:15:52.332 "arbitration_burst": 0, 00:15:52.332 "low_priority_weight": 0, 00:15:52.332 "medium_priority_weight": 0, 00:15:52.332 "high_priority_weight": 0, 00:15:52.332 "nvme_adminq_poll_period_us": 10000, 00:15:52.332 "nvme_ioq_poll_period_us": 0, 00:15:52.332 "io_queue_requests": 0, 00:15:52.332 "delay_cmd_submit": true, 00:15:52.332 "transport_retry_count": 4, 00:15:52.332 "bdev_retry_count": 3, 00:15:52.332 "transport_ack_timeout": 0, 00:15:52.332 "ctrlr_loss_timeout_sec": 0, 00:15:52.332 "reconnect_delay_sec": 0, 00:15:52.332 "fast_io_fail_timeout_sec": 0, 00:15:52.332 "disable_auto_failback": false, 00:15:52.332 "generate_uuids": false, 00:15:52.332 "transport_tos": 0, 00:15:52.332 "nvme_error_stat": false, 00:15:52.332 "rdma_srq_size": 0, 00:15:52.332 "io_path_stat": false, 00:15:52.332 "allow_accel_sequence": false, 00:15:52.332 "rdma_max_cq_size": 0, 00:15:52.332 "rdma_cm_event_timeout_ms": 0, 00:15:52.332 "dhchap_digests": [ 00:15:52.333 "sha256", 00:15:52.333 "sha384", 00:15:52.333 "sha512" 00:15:52.333 ], 00:15:52.333 "dhchap_dhgroups": [ 00:15:52.333 "null", 00:15:52.333 "ffdhe2048", 00:15:52.333 "ffdhe3072", 00:15:52.333 "ffdhe4096", 00:15:52.333 "ffdhe6144", 00:15:52.333 "ffdhe8192" 00:15:52.333 ] 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": "bdev_nvme_set_hotplug", 00:15:52.333 "params": { 00:15:52.333 "period_us": 100000, 00:15:52.333 "enable": false 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": "bdev_malloc_create", 00:15:52.333 "params": { 00:15:52.333 "name": "malloc0", 00:15:52.333 "num_blocks": 8192, 00:15:52.333 "block_size": 4096, 00:15:52.333 "physical_block_size": 4096, 00:15:52.333 "uuid": "b3810c07-ddb6-499a-8ab0-54ff0336a3c7", 00:15:52.333 "optimal_io_boundary": 0 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": "bdev_wait_for_examine" 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "scsi", 00:15:52.333 "config": null 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "scheduler", 00:15:52.333 "config": [ 00:15:52.333 { 00:15:52.333 "method": "framework_set_scheduler", 00:15:52.333 "params": { 00:15:52.333 "name": "static" 00:15:52.333 } 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "vhost_scsi", 00:15:52.333 "config": [] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "vhost_blk", 00:15:52.333 "config": [] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "ublk", 00:15:52.333 "config": [ 00:15:52.333 { 00:15:52.333 "method": "ublk_create_target", 00:15:52.333 "params": { 00:15:52.333 "cpumask": "1" 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": "ublk_start_disk", 00:15:52.333 "params": { 00:15:52.333 "bdev_name": "malloc0", 00:15:52.333 "ublk_id": 0, 00:15:52.333 "num_queues": 1, 00:15:52.333 "queue_depth": 128 00:15:52.333 } 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "nbd", 00:15:52.333 "config": [] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "nvmf", 00:15:52.333 "config": [ 00:15:52.333 { 00:15:52.333 "method": "nvmf_set_config", 00:15:52.333 "params": { 00:15:52.333 "discovery_filter": "match_any", 00:15:52.333 "admin_cmd_passthru": { 00:15:52.333 "identify_ctrlr": false 00:15:52.333 } 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": "nvmf_set_max_subsystems", 00:15:52.333 "params": { 00:15:52.333 "max_subsystems": 1024 00:15:52.333 } 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "method": 
"nvmf_set_crdt", 00:15:52.333 "params": { 00:15:52.333 "crdt1": 0, 00:15:52.333 "crdt2": 0, 00:15:52.333 "crdt3": 0 00:15:52.333 } 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 }, 00:15:52.333 { 00:15:52.333 "subsystem": "iscsi", 00:15:52.333 "config": [ 00:15:52.333 { 00:15:52.333 "method": "iscsi_set_options", 00:15:52.333 "params": { 00:15:52.333 "node_base": "iqn.2016-06.io.spdk", 00:15:52.333 "max_sessions": 128, 00:15:52.333 "max_connections_per_session": 2, 00:15:52.333 "max_queue_depth": 64, 00:15:52.333 "default_time2wait": 2, 00:15:52.333 "default_time2retain": 20, 00:15:52.333 "first_burst_length": 8192, 00:15:52.333 "immediate_data": true, 00:15:52.333 "allow_duplicated_isid": false, 00:15:52.333 "error_recovery_level": 0, 00:15:52.333 "nop_timeout": 60, 00:15:52.333 "nop_in_interval": 30, 00:15:52.333 "disable_chap": false, 00:15:52.333 "require_chap": false, 00:15:52.333 "mutual_chap": false, 00:15:52.333 "chap_group": 0, 00:15:52.333 "max_large_datain_per_connection": 64, 00:15:52.333 "max_r2t_per_connection": 4, 00:15:52.333 "pdu_pool_size": 36864, 00:15:52.333 "immediate_data_pool_size": 16384, 00:15:52.333 "data_out_pool_size": 2048 00:15:52.333 } 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 } 00:15:52.333 ] 00:15:52.333 }' 00:15:52.333 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.333 17:22:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.589 [2024-07-15 17:22:03.263182] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:52.589 [2024-07-15 17:22:03.263394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89971 ] 00:15:52.589 [2024-07-15 17:22:03.416476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:52.589 [2024-07-15 17:22:03.437871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.846 [2024-07-15 17:22:03.536071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.103 [2024-07-15 17:22:03.925385] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:53.103 [2024-07-15 17:22:03.925794] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.103 [2024-07-15 17:22:03.933518] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:53.103 [2024-07-15 17:22:03.933650] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:53.103 [2024-07-15 17:22:03.933674] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:53.103 [2024-07-15 17:22:03.933693] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:53.103 [2024-07-15 17:22:03.942460] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:53.103 [2024-07-15 17:22:03.942487] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:53.103 [2024-07-15 17:22:03.949392] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:53.103 [2024-07-15 17:22:03.949506] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:53.360 [2024-07-15 17:22:03.966386] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:53.360 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 89971 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 89971 ']' 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 89971 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89971 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.617 killing process with pid 89971 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89971' 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 
89971 00:15:53.617 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 89971 00:15:53.875 [2024-07-15 17:22:04.619790] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:53.875 [2024-07-15 17:22:04.653413] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:53.875 [2024-07-15 17:22:04.657389] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:53.875 [2024-07-15 17:22:04.668399] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:53.875 [2024-07-15 17:22:04.668471] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:53.875 [2024-07-15 17:22:04.668484] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:53.875 [2024-07-15 17:22:04.668514] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:53.875 [2024-07-15 17:22:04.668705] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:54.132 17:22:04 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:54.133 00:15:54.133 real 0m3.878s 00:15:54.133 user 0m3.102s 00:15:54.133 sys 0m1.691s 00:15:54.133 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.133 ************************************ 00:15:54.133 END TEST test_save_ublk_config 00:15:54.133 ************************************ 00:15:54.133 17:22:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 17:22:04 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:54.390 17:22:05 ublk -- ublk/ublk.sh@139 -- # spdk_pid=90023 00:15:54.390 17:22:05 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:54.390 17:22:05 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.390 17:22:05 ublk -- ublk/ublk.sh@141 -- # waitforlisten 90023 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@829 -- # '[' -z 90023 ']' 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.390 17:22:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 [2024-07-15 17:22:05.094843] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:54.390 [2024-07-15 17:22:05.095018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90023 ] 00:15:54.390 [2024-07-15 17:22:05.239504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:54.647 [2024-07-15 17:22:05.260871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.647 [2024-07-15 17:22:05.356450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.647 [2024-07-15 17:22:05.356496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.212 17:22:06 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.212 17:22:06 ublk -- common/autotest_common.sh@862 -- # return 0 00:15:55.212 17:22:06 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:55.212 17:22:06 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:55.212 17:22:06 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.212 17:22:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.212 ************************************ 00:15:55.212 START TEST test_create_ublk 00:15:55.212 ************************************ 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:15:55.212 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.212 [2024-07-15 17:22:06.050444] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:55.212 [2024-07-15 17:22:06.052315] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.212 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:55.212 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.212 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.470 [2024-07-15 17:22:06.138613] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:55.470 [2024-07-15 17:22:06.139134] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:55.470 [2024-07-15 17:22:06.139162] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:55.470 [2024-07-15 17:22:06.139176] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:55.470 [2024-07-15 17:22:06.147745] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:55.470 [2024-07-15 17:22:06.147795] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:55.470 [2024-07-15 17:22:06.154393] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:55.470 [2024-07-15 17:22:06.164457] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:55.470 [2024-07-15 17:22:06.188396] ublk.c: 328:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.470 17:22:06 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:55.470 { 00:15:55.470 "ublk_device": "/dev/ublkb0", 00:15:55.470 "id": 0, 00:15:55.470 "queue_depth": 512, 00:15:55.470 "num_queues": 4, 00:15:55.470 "bdev_name": "Malloc0" 00:15:55.470 } 00:15:55.470 ]' 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:55.470 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:55.727 17:22:06 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:55.727 17:22:06 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:55.728 17:22:06 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:55.728 fio: verification read phase will never 
start because write phase uses all of runtime 00:15:55.728 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:55.728 fio-3.35 00:15:55.728 Starting 1 process 00:16:07.971 00:16:07.971 fio_test: (groupid=0, jobs=1): err= 0: pid=90062: Mon Jul 15 17:22:16 2024 00:16:07.971 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(409MiB/10001msec); 0 zone resets 00:16:07.971 clat (usec): min=57, max=4074, avg=94.08, stdev=133.80 00:16:07.971 lat (usec): min=57, max=4075, avg=94.80, stdev=133.81 00:16:07.971 clat percentiles (usec): 00:16:07.971 | 1.00th=[ 61], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 74], 00:16:07.971 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 80], 00:16:07.971 | 70.00th=[ 88], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 129], 00:16:07.971 | 99.00th=[ 151], 99.50th=[ 169], 99.90th=[ 2737], 99.95th=[ 3228], 00:16:07.971 | 99.99th=[ 3785] 00:16:07.971 bw ( KiB/s): min=30416, max=51136, per=99.62%, avg=41747.37, stdev=7720.00, samples=19 00:16:07.971 iops : min= 7604, max=12784, avg=10436.84, stdev=1930.00, samples=19 00:16:07.971 lat (usec) : 100=74.84%, 250=24.77%, 500=0.03%, 750=0.03%, 1000=0.03% 00:16:07.971 lat (msec) : 2=0.11%, 4=0.19%, 10=0.01% 00:16:07.971 cpu : usr=2.79%, sys=7.46%, ctx=104776, majf=0, minf=794 00:16:07.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.971 issued rwts: total=0,104775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.971 00:16:07.971 Run status group 0 (all jobs): 00:16:07.971 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=409MiB (429MB), run=10001-10001msec 00:16:07.971 00:16:07.971 Disk stats (read/write): 00:16:07.971 ublkb0: ios=0/103711, merge=0/0, ticks=0/8938, in_queue=8939, util=99.08% 00:16:07.971 17:22:16 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:07.971 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.971 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.971 [2024-07-15 17:22:16.700159] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:07.971 [2024-07-15 17:22:16.735922] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:07.972 [2024-07-15 17:22:16.737305] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:07.972 [2024-07-15 17:22:16.742408] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:07.972 [2024-07-15 17:22:16.742741] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:07.972 [2024-07-15 17:22:16.742764] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:07.972 17:22:16 
ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:16.758519] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:07.972 request: 00:16:07.972 { 00:16:07.972 "ublk_id": 0, 00:16:07.972 "method": "ublk_stop_disk", 00:16:07.972 "req_id": 1 00:16:07.972 } 00:16:07.972 Got JSON-RPC error response 00:16:07.972 response: 00:16:07.972 { 00:16:07.972 "code": -19, 00:16:07.972 "message": "No such device" 00:16:07.972 } 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.972 17:22:16 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:16.774476] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:07.972 [2024-07-15 17:22:16.776688] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:07.972 [2024-07-15 17:22:16.776765] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:16 
ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:07.972 17:22:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:07.972 00:16:07.972 real 0m10.927s 00:16:07.972 user 0m0.698s 00:16:07.972 sys 0m0.859s 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.972 ************************************ 00:16:07.972 17:22:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 END TEST test_create_ublk 00:16:07.972 ************************************ 00:16:07.972 17:22:17 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:07.972 17:22:17 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:07.972 17:22:17 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:07.972 17:22:17 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.972 17:22:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 ************************************ 00:16:07.972 START TEST test_create_multi_ublk 00:16:07.972 ************************************ 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:17.022388] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:07.972 [2024-07-15 17:22:17.024114] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:17.117558] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:07.972 [2024-07-15 17:22:17.118090] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:07.972 [2024-07-15 17:22:17.118116] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:07.972 [2024-07-15 
17:22:17.118126] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.972 [2024-07-15 17:22:17.126690] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.972 [2024-07-15 17:22:17.126717] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.972 [2024-07-15 17:22:17.132417] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.972 [2024-07-15 17:22:17.133182] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:07.972 [2024-07-15 17:22:17.148440] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:17.243573] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:07.972 [2024-07-15 17:22:17.244109] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:07.972 [2024-07-15 17:22:17.244131] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:07.972 [2024-07-15 17:22:17.244145] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.972 [2024-07-15 17:22:17.252725] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.972 [2024-07-15 17:22:17.252761] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.972 [2024-07-15 17:22:17.259394] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.972 [2024-07-15 17:22:17.260288] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:07.972 [2024-07-15 17:22:17.268452] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.972 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.972 [2024-07-15 17:22:17.371544] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:07.972 [2024-07-15 17:22:17.372075] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:07.972 [2024-07-15 17:22:17.372105] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:07.972 [2024-07-15 17:22:17.372116] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.972 [2024-07-15 17:22:17.379444] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.972 [2024-07-15 17:22:17.379479] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.972 [2024-07-15 17:22:17.387456] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.973 [2024-07-15 17:22:17.388304] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:07.973 [2024-07-15 17:22:17.408396] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 [2024-07-15 17:22:17.500724] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:07.973 [2024-07-15 17:22:17.501278] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:07.973 [2024-07-15 17:22:17.501300] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:07.973 [2024-07-15 17:22:17.501314] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.973 [2024-07-15 17:22:17.507388] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.973 [2024-07-15 17:22:17.507422] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.973 [2024-07-15 
17:22:17.515391] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.973 [2024-07-15 17:22:17.516142] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:07.973 [2024-07-15 17:22:17.519209] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:07.973 { 00:16:07.973 "ublk_device": "/dev/ublkb0", 00:16:07.973 "id": 0, 00:16:07.973 "queue_depth": 512, 00:16:07.973 "num_queues": 4, 00:16:07.973 "bdev_name": "Malloc0" 00:16:07.973 }, 00:16:07.973 { 00:16:07.973 "ublk_device": "/dev/ublkb1", 00:16:07.973 "id": 1, 00:16:07.973 "queue_depth": 512, 00:16:07.973 "num_queues": 4, 00:16:07.973 "bdev_name": "Malloc1" 00:16:07.973 }, 00:16:07.973 { 00:16:07.973 "ublk_device": "/dev/ublkb2", 00:16:07.973 "id": 2, 00:16:07.973 "queue_depth": 512, 00:16:07.973 "num_queues": 4, 00:16:07.973 "bdev_name": "Malloc2" 00:16:07.973 }, 00:16:07.973 { 00:16:07.973 "ublk_device": "/dev/ublkb3", 00:16:07.973 "id": 3, 00:16:07.973 "queue_depth": 512, 00:16:07.973 "num_queues": 4, 00:16:07.973 "bdev_name": "Malloc3" 00:16:07.973 } 00:16:07.973 ]' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r 
'.[1].queue_depth' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.973 17:22:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 [2024-07-15 17:22:18.550586] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:07.973 [2024-07-15 17:22:18.585961] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:07.973 [2024-07-15 17:22:18.587472] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:07.973 [2024-07-15 17:22:18.593416] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:07.973 [2024-07-15 17:22:18.593746] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:07.973 [2024-07-15 17:22:18.593767] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 [2024-07-15 17:22:18.609498] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:07.973 [2024-07-15 17:22:18.641431] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:07.973 [2024-07-15 17:22:18.642733] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:07.973 [2024-07-15 17:22:18.649402] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:07.973 [2024-07-15 17:22:18.649736] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:07.973 [2024-07-15 17:22:18.649757] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 [2024-07-15 17:22:18.665556] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:07.973 [2024-07-15 17:22:18.705455] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:07.973 [2024-07-15 17:22:18.706712] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:07.973 [2024-07-15 17:22:18.713400] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:07.973 [2024-07-15 17:22:18.713735] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:07.973 [2024-07-15 17:22:18.713758] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.973 [2024-07-15 17:22:18.729529] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:07.973 [2024-07-15 17:22:18.769986] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:07.973 
[2024-07-15 17:22:18.771354] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:07.973 [2024-07-15 17:22:18.777399] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:07.973 [2024-07-15 17:22:18.777757] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:07.973 [2024-07-15 17:22:18.777787] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.973 17:22:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:08.232 [2024-07-15 17:22:19.057514] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:08.232 [2024-07-15 17:22:19.059411] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:08.232 [2024-07-15 17:22:19.059463] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:08.232 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.493 17:22:19 
ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:08.493 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:08.752 ************************************ 00:16:08.752 END TEST test_create_multi_ublk 00:16:08.752 ************************************ 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:08.752 00:16:08.752 real 0m2.436s 00:16:08.752 user 0m1.290s 00:16:08.752 sys 0m0.176s 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.752 17:22:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:08.752 17:22:19 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:08.752 17:22:19 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:08.752 17:22:19 ublk -- ublk/ublk.sh@130 -- # killprocess 90023 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@948 -- # '[' -z 90023 ']' 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@952 -- # kill -0 90023 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@953 -- # uname 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90023 00:16:08.752 killing process with pid 90023 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90023' 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@967 -- # kill 90023 00:16:08.752 17:22:19 ublk -- common/autotest_common.sh@972 -- # wait 90023 00:16:09.010 [2024-07-15 17:22:19.689046] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.010 [2024-07-15 17:22:19.689148] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:09.268 ************************************ 00:16:09.268 END TEST ublk 00:16:09.268 ************************************ 00:16:09.268 00:16:09.268 real 0m18.978s 00:16:09.268 user 0m30.158s 00:16:09.268 sys 0m7.548s 00:16:09.268 17:22:19 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.268 17:22:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.268 17:22:19 -- common/autotest_common.sh@1142 -- # return 0 00:16:09.268 17:22:19 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:09.268 17:22:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.268 17:22:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.268 
17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:16:09.268 ************************************ 00:16:09.268 START TEST ublk_recovery 00:16:09.268 ************************************ 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:09.268 * Looking for test storage... 00:16:09.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:09.268 17:22:20 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=90365 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.268 17:22:20 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 90365 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 90365 ']' 00:16:09.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.268 17:22:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.526 [2024-07-15 17:22:20.221500] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:09.526 [2024-07-15 17:22:20.222451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90365 ] 00:16:09.526 [2024-07-15 17:22:20.376472] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
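The recovery suite drives the target over the same JSON-RPC interface as the ublk tests above. For reference, the flow those tests exercised can be reproduced by hand with the repo's scripts/rpc.py client; a minimal sketch, assuming a spdk_tgt is already running and reusing the sizes from the tests (the bdev name and device id are otherwise arbitrary):
    scripts/rpc.py ublk_create_target                        # create the kernel-facing ublk target
    scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096    # 128 MB malloc bdev with 4 KiB blocks
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512     # expose it as /dev/ublkb0 (4 queues, depth 512)
    scripts/rpc.py ublk_get_disks -n 0                       # confirm the device is registered
    scripts/rpc.py ublk_stop_disk 0                          # teardown: stop the device...
    scripts/rpc.py ublk_destroy_target                       # ...and remove the target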
00:16:09.784 [2024-07-15 17:22:20.397916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.784 [2024-07-15 17:22:20.485284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.784 [2024-07-15 17:22:20.485336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:10.352 17:22:21 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.352 [2024-07-15 17:22:21.196386] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:10.352 [2024-07-15 17:22:21.198319] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.352 17:22:21 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.352 17:22:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.611 malloc0 00:16:10.611 17:22:21 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.611 17:22:21 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:10.611 17:22:21 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.611 17:22:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.611 [2024-07-15 17:22:21.252577] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:10.611 [2024-07-15 17:22:21.252737] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:10.611 [2024-07-15 17:22:21.252757] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:10.611 [2024-07-15 17:22:21.252769] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.611 [2024-07-15 17:22:21.261524] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.611 [2024-07-15 17:22:21.261564] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.611 [2024-07-15 17:22:21.268408] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.611 [2024-07-15 17:22:21.268610] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:10.611 [2024-07-15 17:22:21.284392] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.611 1 00:16:10.611 17:22:21 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.611 17:22:21 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:11.546 17:22:22 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=90404 00:16:11.546 17:22:22 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:11.546 17:22:22 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:11.804 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.804 
fio-3.35 00:16:11.804 Starting 1 process 00:16:17.143 17:22:27 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 90365 00:16:17.143 17:22:27 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:22.397 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 90365 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:22.397 17:22:32 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=90510 00:16:22.397 17:22:32 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:22.397 17:22:32 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.398 17:22:32 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 90510 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 90510 ']' 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.398 17:22:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.398 [2024-07-15 17:22:32.418129] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:22.398 [2024-07-15 17:22:32.418302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90510 ] 00:16:22.398 [2024-07-15 17:22:32.571274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
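The kill -9 above simulates a target crash while fio is still driving I/O to /dev/ublkb1; the replacement spdk_tgt (pid 90510) then re-attaches the existing device rather than recreating it. A rough sketch of that sequence, assuming the same malloc backing and leaving out the wait for the new RPC socket:
    kill -9 "$spdk_pid"                                # crash the target mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!    # bring up a fresh target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1         # re-bind bdev malloc0 to the existing ublk id 1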
00:16:22.398 [2024-07-15 17:22:32.594564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.398 [2024-07-15 17:22:32.673262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.398 [2024-07-15 17:22:32.673305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:22.655 17:22:33 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.655 [2024-07-15 17:22:33.391390] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:22.655 [2024-07-15 17:22:33.393426] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.655 17:22:33 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.655 malloc0 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.655 17:22:33 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.655 [2024-07-15 17:22:33.446568] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:22.655 [2024-07-15 17:22:33.446624] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:22.655 [2024-07-15 17:22:33.446637] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:22.655 [2024-07-15 17:22:33.456433] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:22.655 [2024-07-15 17:22:33.456461] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:22.655 [2024-07-15 17:22:33.456581] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:22.655 1 00:16:22.655 17:22:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.655 17:22:33 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 90404 00:16:49.182 [2024-07-15 17:22:57.605440] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:49.182 [2024-07-15 17:22:57.613225] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:49.182 [2024-07-15 17:22:57.626812] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:49.182 [2024-07-15 17:22:57.626845] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:15.722 00:17:15.722 fio_test: (groupid=0, jobs=1): err= 0: pid=90407: Mon Jul 15 17:23:22 2024 00:17:15.722 read: IOPS=9560, BW=37.3MiB/s (39.2MB/s)(2241MiB/60002msec) 00:17:15.722 slat (nsec): min=1879, max=318156, avg=6773.53, stdev=3013.91 00:17:15.722 clat (usec): min=1097, max=30339k, avg=6898.47, stdev=332652.60 00:17:15.722 lat 
(usec): min=1120, max=30339k, avg=6905.24, stdev=332652.59 00:17:15.722 clat percentiles (msec): 00:17:15.722 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:15.722 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:15.722 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:15.723 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 13], 00:17:15.723 | 99.99th=[17113] 00:17:15.723 bw ( KiB/s): min=34304, max=85152, per=100.00%, avg=76662.24, stdev=9123.50, samples=59 00:17:15.723 iops : min= 8576, max=21288, avg=19165.53, stdev=2280.87, samples=59 00:17:15.723 write: IOPS=9547, BW=37.3MiB/s (39.1MB/s)(2238MiB/60002msec); 0 zone resets 00:17:15.723 slat (usec): min=2, max=218, avg= 6.81, stdev= 2.97 00:17:15.723 clat (usec): min=937, max=30339k, avg=6482.75, stdev=307820.29 00:17:15.723 lat (usec): min=943, max=30339k, avg=6489.56, stdev=307820.28 00:17:15.723 clat percentiles (msec): 00:17:15.723 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:17:15.723 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:15.723 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:15.723 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 11], 00:17:15.723 | 99.99th=[17113] 00:17:15.723 bw ( KiB/s): min=34016, max=85608, per=100.00%, avg=76543.93, stdev=9175.63, samples=59 00:17:15.723 iops : min= 8504, max=21402, avg=19135.97, stdev=2293.90, samples=59 00:17:15.723 lat (usec) : 1000=0.01% 00:17:15.723 lat (msec) : 2=0.05%, 4=94.14%, 10=5.75%, 20=0.05%, >=2000=0.01% 00:17:15.723 cpu : usr=5.33%, sys=12.32%, ctx=37321, majf=0, minf=13 00:17:15.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:15.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.723 issued rwts: total=573677,572879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.723 00:17:15.723 Run status group 0 (all jobs): 00:17:15.723 READ: bw=37.3MiB/s (39.2MB/s), 37.3MiB/s-37.3MiB/s (39.2MB/s-39.2MB/s), io=2241MiB (2350MB), run=60002-60002msec 00:17:15.723 WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=2238MiB (2347MB), run=60002-60002msec 00:17:15.723 00:17:15.723 Disk stats (read/write): 00:17:15.723 ublkb1: ios=571593/570711, merge=0/0, ticks=3897146/3585319, in_queue=7482465, util=99.95% 00:17:15.723 17:23:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.723 [2024-07-15 17:23:22.560092] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:15.723 [2024-07-15 17:23:22.595464] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:15.723 [2024-07-15 17:23:22.595809] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:15.723 [2024-07-15 17:23:22.603418] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:15.723 [2024-07-15 17:23:22.603611] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:15.723 [2024-07-15 17:23:22.603634] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
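As a sanity check, the summary above is internally consistent: 573,677 reads of 4 KiB come to roughly 2241 MiB, and 2241 MiB over 60 s is about 37.3 MiB/s at roughly 9.6k IOPS, matching the reported READ figures; the write side works out the same way from the 572,879 completed writes.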
00:17:15.723 17:23:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.723 [2024-07-15 17:23:22.619539] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.723 [2024-07-15 17:23:22.621643] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:15.723 [2024-07-15 17:23:22.621724] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.723 17:23:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:15.723 17:23:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:15.723 17:23:22 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 90510 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 90510 ']' 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 90510 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90510 00:17:15.723 killing process with pid 90510 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90510' 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@967 -- # kill 90510 00:17:15.723 17:23:22 ublk_recovery -- common/autotest_common.sh@972 -- # wait 90510 00:17:15.723 [2024-07-15 17:23:22.822859] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.723 [2024-07-15 17:23:22.822952] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:15.723 ************************************ 00:17:15.723 END TEST ublk_recovery 00:17:15.723 ************************************ 00:17:15.723 00:17:15.723 real 1m3.114s 00:17:15.723 user 1m48.390s 00:17:15.723 sys 0m18.084s 00:17:15.723 17:23:23 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.723 17:23:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.723 17:23:23 -- common/autotest_common.sh@1142 -- # return 0 00:17:15.723 17:23:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:15.723 17:23:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.723 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:17:15.723 17:23:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:17:15.723 17:23:23 -- spdk/autotest.sh@340 -- # run_test ftl 
/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.723 17:23:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:15.723 17:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.723 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:17:15.723 ************************************ 00:17:15.723 START TEST ftl 00:17:15.723 ************************************ 00:17:15.723 17:23:23 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.723 * Looking for test storage... 00:17:15.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.723 17:23:23 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.723 17:23:23 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:15.723 17:23:23 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.723 17:23:23 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.723 17:23:23 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:15.723 17:23:23 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.723 17:23:23 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.723 17:23:23 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.723 17:23:23 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.723 17:23:23 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.723 17:23:23 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.723 17:23:23 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.723 17:23:23 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.723 17:23:23 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.723 17:23:23 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.723 17:23:23 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.723 17:23:23 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.724 17:23:23 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.724 17:23:23 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.724 17:23:23 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.724 17:23:23 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.724 17:23:23 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.724 17:23:23 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.724 17:23:23 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.724 17:23:23 ftl -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:15.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:15.724 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.724 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.724 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.724 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=91283 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:15.724 17:23:23 ftl -- ftl/ftl.sh@38 -- # waitforlisten 91283 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@829 -- # '[' -z 91283 ']' 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.724 17:23:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:15.724 [2024-07-15 17:23:23.965567] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:15.724 [2024-07-15 17:23:23.965745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91283 ] 00:17:15.724 [2024-07-15 17:23:24.117503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
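Because spdk_tgt is launched with --wait-for-rpc, the framework stays paused until an explicit init RPC; the records that follow show that hand-off. Roughly, with the long /home/vagrant/spdk_repo paths shortened, the sequence amounts to:
  build/bin/spdk_tgt --wait-for-rpc &                              # target starts with subsystem init deferred
  scripts/rpc.py bdev_set_options -d                               # pre-init bdev option (-d disables auto-examine)
  scripts/rpc.py framework_start_init                              # finish subsystem initialization
  scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)   # attach local NVMe controllers; the /dev/fd/62 seen below is this process substitution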
00:17:15.724 [2024-07-15 17:23:24.139749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.724 [2024-07-15 17:23:24.231143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.724 17:23:24 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.724 17:23:24 ftl -- common/autotest_common.sh@862 -- # return 0 00:17:15.724 17:23:24 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:15.724 17:23:25 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:15.724 17:23:25 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:15.724 17:23:25 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@50 -- # break 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:15.724 17:23:26 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:15.983 17:23:26 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:15.983 17:23:26 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:15.983 17:23:26 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:15.983 17:23:26 ftl -- ftl/ftl.sh@63 -- # break 00:17:15.983 17:23:26 ftl -- ftl/ftl.sh@66 -- # killprocess 91283 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@948 -- # '[' -z 91283 ']' 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@952 -- # kill -0 91283 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@953 -- # uname 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91283 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.983 killing process with pid 91283 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91283' 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@967 -- # kill 91283 00:17:15.983 17:23:26 ftl -- common/autotest_common.sh@972 -- # wait 91283 00:17:16.548 17:23:27 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:16.548 17:23:27 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:16.548 17:23:27 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:16.548 17:23:27 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.548 17:23:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:16.548 
************************************ 00:17:16.548 START TEST ftl_fio_basic 00:17:16.548 ************************************ 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:16.548 * Looking for test storage... 00:17:16.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:16.548 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=91396 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 91396 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 91396 ']' 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.549 17:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:16.807 [2024-07-15 17:23:27.462113] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:16.807 [2024-07-15 17:23:27.462305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91396 ] 00:17:16.807 [2024-07-15 17:23:27.614567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
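The records that follow build the bdev stack for ftl0 one RPC at a time; collected in one place (names, sizes, and UUIDs exactly as they appear later in this run, rpc.py path shortened), the sequence is:
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0                   # base device -> nvme0n1
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                                           # lvstore 295424f7-b374-4b85-8dfb-f9c1dfa409f9
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 295424f7-b374-4b85-8dfb-f9c1dfa409f9   # thin-provisioned 103424 MiB base lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0                    # cache device -> nvc0n1
  rpc.py bdev_split_create nvc0n1 -s 5171 1                                             # 5171 MiB split -> nvc0n1p0 (write-buffer cache)
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 -c nvc0n1p0 --l2p_dram_limit 60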
00:17:16.807 [2024-07-15 17:23:27.634256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.065 [2024-07-15 17:23:27.736795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.065 [2024-07-15 17:23:27.736926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.065 [2024-07-15 17:23:27.736995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:17.631 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:17.889 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:18.146 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:18.146 { 00:17:18.146 "name": "nvme0n1", 00:17:18.146 "aliases": [ 00:17:18.146 "4b0a4ec9-517f-4947-bf94-b1f1ffe22ee4" 00:17:18.146 ], 00:17:18.146 "product_name": "NVMe disk", 00:17:18.146 "block_size": 4096, 00:17:18.146 "num_blocks": 1310720, 00:17:18.146 "uuid": "4b0a4ec9-517f-4947-bf94-b1f1ffe22ee4", 00:17:18.146 "assigned_rate_limits": { 00:17:18.146 "rw_ios_per_sec": 0, 00:17:18.146 "rw_mbytes_per_sec": 0, 00:17:18.146 "r_mbytes_per_sec": 0, 00:17:18.146 "w_mbytes_per_sec": 0 00:17:18.146 }, 00:17:18.146 "claimed": false, 00:17:18.146 "zoned": false, 00:17:18.146 "supported_io_types": { 00:17:18.146 "read": true, 00:17:18.146 "write": true, 00:17:18.146 "unmap": true, 00:17:18.146 "flush": true, 00:17:18.146 "reset": true, 00:17:18.146 "nvme_admin": true, 00:17:18.146 "nvme_io": true, 00:17:18.146 "nvme_io_md": false, 00:17:18.146 "write_zeroes": true, 00:17:18.146 "zcopy": false, 00:17:18.146 "get_zone_info": false, 00:17:18.146 "zone_management": false, 00:17:18.146 "zone_append": false, 00:17:18.146 "compare": true, 00:17:18.146 "compare_and_write": false, 00:17:18.146 "abort": true, 00:17:18.146 "seek_hole": false, 00:17:18.146 "seek_data": false, 00:17:18.146 "copy": true, 00:17:18.146 "nvme_iov_md": false 00:17:18.146 }, 00:17:18.146 "driver_specific": { 00:17:18.146 "nvme": [ 00:17:18.146 { 00:17:18.146 "pci_address": "0000:00:11.0", 00:17:18.146 "trid": { 00:17:18.146 "trtype": "PCIe", 00:17:18.146 "traddr": "0000:00:11.0" 
00:17:18.146 }, 00:17:18.147 "ctrlr_data": { 00:17:18.147 "cntlid": 0, 00:17:18.147 "vendor_id": "0x1b36", 00:17:18.147 "model_number": "QEMU NVMe Ctrl", 00:17:18.147 "serial_number": "12341", 00:17:18.147 "firmware_revision": "8.0.0", 00:17:18.147 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:18.147 "oacs": { 00:17:18.147 "security": 0, 00:17:18.147 "format": 1, 00:17:18.147 "firmware": 0, 00:17:18.147 "ns_manage": 1 00:17:18.147 }, 00:17:18.147 "multi_ctrlr": false, 00:17:18.147 "ana_reporting": false 00:17:18.147 }, 00:17:18.147 "vs": { 00:17:18.147 "nvme_version": "1.4" 00:17:18.147 }, 00:17:18.147 "ns_data": { 00:17:18.147 "id": 1, 00:17:18.147 "can_share": false 00:17:18.147 } 00:17:18.147 } 00:17:18.147 ], 00:17:18.147 "mp_policy": "active_passive" 00:17:18.147 } 00:17:18.147 } 00:17:18.147 ]' 00:17:18.147 17:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:18.404 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:18.660 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:18.660 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:18.917 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=295424f7-b374-4b85-8dfb-f9c1dfa409f9 00:17:18.917 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 295424f7-b374-4b85-8dfb-f9c1dfa409f9 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:19.205 17:23:29 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:19.462 { 00:17:19.462 "name": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:19.462 "aliases": [ 00:17:19.462 "lvs/nvme0n1p0" 00:17:19.462 ], 00:17:19.462 "product_name": "Logical Volume", 00:17:19.462 "block_size": 4096, 00:17:19.462 "num_blocks": 26476544, 00:17:19.462 "uuid": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:19.462 "assigned_rate_limits": { 00:17:19.462 "rw_ios_per_sec": 0, 00:17:19.462 "rw_mbytes_per_sec": 0, 00:17:19.462 "r_mbytes_per_sec": 0, 00:17:19.462 "w_mbytes_per_sec": 0 00:17:19.462 }, 00:17:19.462 "claimed": false, 00:17:19.462 "zoned": false, 00:17:19.462 "supported_io_types": { 00:17:19.462 "read": true, 00:17:19.462 "write": true, 00:17:19.462 "unmap": true, 00:17:19.462 "flush": false, 00:17:19.462 "reset": true, 00:17:19.462 "nvme_admin": false, 00:17:19.462 "nvme_io": false, 00:17:19.462 "nvme_io_md": false, 00:17:19.462 "write_zeroes": true, 00:17:19.462 "zcopy": false, 00:17:19.462 "get_zone_info": false, 00:17:19.462 "zone_management": false, 00:17:19.462 "zone_append": false, 00:17:19.462 "compare": false, 00:17:19.462 "compare_and_write": false, 00:17:19.462 "abort": false, 00:17:19.462 "seek_hole": true, 00:17:19.462 "seek_data": true, 00:17:19.462 "copy": false, 00:17:19.462 "nvme_iov_md": false 00:17:19.462 }, 00:17:19.462 "driver_specific": { 00:17:19.462 "lvol": { 00:17:19.462 "lvol_store_uuid": "295424f7-b374-4b85-8dfb-f9c1dfa409f9", 00:17:19.462 "base_bdev": "nvme0n1", 00:17:19.462 "thin_provision": true, 00:17:19.462 "num_allocated_clusters": 0, 00:17:19.462 "snapshot": false, 00:17:19.462 "clone": false, 00:17:19.462 "esnap_clone": false 00:17:19.462 } 00:17:19.462 } 00:17:19.462 } 00:17:19.462 ]' 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:19.462 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:19.463 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:19.463 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.027 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:20.027 { 00:17:20.027 "name": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:20.027 "aliases": [ 00:17:20.027 "lvs/nvme0n1p0" 00:17:20.027 ], 00:17:20.027 "product_name": "Logical Volume", 00:17:20.027 "block_size": 4096, 00:17:20.027 "num_blocks": 26476544, 00:17:20.027 "uuid": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:20.027 "assigned_rate_limits": { 00:17:20.027 "rw_ios_per_sec": 0, 00:17:20.027 "rw_mbytes_per_sec": 0, 00:17:20.027 "r_mbytes_per_sec": 0, 00:17:20.027 "w_mbytes_per_sec": 0 00:17:20.028 }, 00:17:20.028 "claimed": false, 00:17:20.028 "zoned": false, 00:17:20.028 "supported_io_types": { 00:17:20.028 "read": true, 00:17:20.028 "write": true, 00:17:20.028 "unmap": true, 00:17:20.028 "flush": false, 00:17:20.028 "reset": true, 00:17:20.028 "nvme_admin": false, 00:17:20.028 "nvme_io": false, 00:17:20.028 "nvme_io_md": false, 00:17:20.028 "write_zeroes": true, 00:17:20.028 "zcopy": false, 00:17:20.028 "get_zone_info": false, 00:17:20.028 "zone_management": false, 00:17:20.028 "zone_append": false, 00:17:20.028 "compare": false, 00:17:20.028 "compare_and_write": false, 00:17:20.028 "abort": false, 00:17:20.028 "seek_hole": true, 00:17:20.028 "seek_data": true, 00:17:20.028 "copy": false, 00:17:20.028 "nvme_iov_md": false 00:17:20.028 }, 00:17:20.028 "driver_specific": { 00:17:20.028 "lvol": { 00:17:20.028 "lvol_store_uuid": "295424f7-b374-4b85-8dfb-f9c1dfa409f9", 00:17:20.028 "base_bdev": "nvme0n1", 00:17:20.028 "thin_provision": true, 00:17:20.028 "num_allocated_clusters": 0, 00:17:20.028 "snapshot": false, 00:17:20.028 "clone": false, 00:17:20.028 "esnap_clone": false 00:17:20.028 } 00:17:20.028 } 00:17:20.028 } 00:17:20.028 ]' 00:17:20.028 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:20.284 17:23:30 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:20.541 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:20.542 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:20.542 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:20.542 17:23:31 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 00:17:20.799 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:20.799 { 00:17:20.799 "name": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:20.799 "aliases": [ 00:17:20.799 "lvs/nvme0n1p0" 00:17:20.799 ], 00:17:20.799 "product_name": "Logical Volume", 00:17:20.799 "block_size": 4096, 00:17:20.799 "num_blocks": 26476544, 00:17:20.799 "uuid": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:20.799 "assigned_rate_limits": { 00:17:20.799 "rw_ios_per_sec": 0, 00:17:20.799 "rw_mbytes_per_sec": 0, 00:17:20.799 "r_mbytes_per_sec": 0, 00:17:20.799 "w_mbytes_per_sec": 0 00:17:20.799 }, 00:17:20.799 "claimed": false, 00:17:20.799 "zoned": false, 00:17:20.799 "supported_io_types": { 00:17:20.799 "read": true, 00:17:20.799 "write": true, 00:17:20.799 "unmap": true, 00:17:20.799 "flush": false, 00:17:20.799 "reset": true, 00:17:20.799 "nvme_admin": false, 00:17:20.799 "nvme_io": false, 00:17:20.799 "nvme_io_md": false, 00:17:20.799 "write_zeroes": true, 00:17:20.799 "zcopy": false, 00:17:20.799 "get_zone_info": false, 00:17:20.799 "zone_management": false, 00:17:20.799 "zone_append": false, 00:17:20.799 "compare": false, 00:17:20.799 "compare_and_write": false, 00:17:20.799 "abort": false, 00:17:20.799 "seek_hole": true, 00:17:20.799 "seek_data": true, 00:17:20.799 "copy": false, 00:17:20.799 "nvme_iov_md": false 00:17:20.799 }, 00:17:20.799 "driver_specific": { 00:17:20.799 "lvol": { 00:17:20.799 "lvol_store_uuid": "295424f7-b374-4b85-8dfb-f9c1dfa409f9", 00:17:20.799 "base_bdev": "nvme0n1", 00:17:20.799 "thin_provision": true, 00:17:20.799 "num_allocated_clusters": 0, 00:17:20.799 "snapshot": false, 00:17:20.799 "clone": false, 00:17:20.799 "esnap_clone": false 00:17:20.799 } 00:17:20.799 } 00:17:20.799 } 00:17:20.799 ]' 00:17:20.799 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:20.799 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:20.799 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:21.057 17:23:31 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5 -c nvc0n1p0 --l2p_dram_limit 60 00:17:21.057 [2024-07-15 17:23:31.898177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.898253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.057 [2024-07-15 17:23:31.898276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:21.057 [2024-07-15 17:23:31.898291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.898428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.898454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.057 [2024-07-15 17:23:31.898489] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:21.057 [2024-07-15 17:23:31.898524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.898591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.057 [2024-07-15 17:23:31.898998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.057 [2024-07-15 17:23:31.899035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.899053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.057 [2024-07-15 17:23:31.899084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:21.057 [2024-07-15 17:23:31.899100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.899266] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2654eda0-e853-4468-be6d-0dddadf2ef59 00:17:21.057 [2024-07-15 17:23:31.901411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.901577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:21.057 [2024-07-15 17:23:31.901711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:21.057 [2024-07-15 17:23:31.901875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.911870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.912109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.057 [2024-07-15 17:23:31.912286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.773 ms 00:17:21.057 [2024-07-15 17:23:31.912341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.912639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.057 [2024-07-15 17:23:31.912795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.057 [2024-07-15 17:23:31.912933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:21.057 [2024-07-15 17:23:31.912986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.057 [2024-07-15 17:23:31.913134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.058 [2024-07-15 17:23:31.913218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.058 [2024-07-15 17:23:31.913312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:21.058 [2024-07-15 17:23:31.913371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.058 [2024-07-15 17:23:31.913470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.316 [2024-07-15 17:23:31.915854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.316 [2024-07-15 17:23:31.916021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.316 [2024-07-15 17:23:31.916184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:17:21.316 [2024-07-15 17:23:31.916214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.316 [2024-07-15 17:23:31.916308] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.316 [2024-07-15 17:23:31.916338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.316 [2024-07-15 17:23:31.916395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:21.316 [2024-07-15 17:23:31.916419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.316 [2024-07-15 17:23:31.916472] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:21.316 [2024-07-15 17:23:31.916660] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.316 [2024-07-15 17:23:31.916684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.316 [2024-07-15 17:23:31.916720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:21.316 [2024-07-15 17:23:31.916747] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.316 [2024-07-15 17:23:31.916764] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.316 [2024-07-15 17:23:31.916778] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:21.316 [2024-07-15 17:23:31.916792] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.316 [2024-07-15 17:23:31.916820] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.316 [2024-07-15 17:23:31.916835] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.316 [2024-07-15 17:23:31.916849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.316 [2024-07-15 17:23:31.916863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.316 [2024-07-15 17:23:31.916876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:17:21.316 [2024-07-15 17:23:31.916905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.316 [2024-07-15 17:23:31.917024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.316 [2024-07-15 17:23:31.917051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.316 [2024-07-15 17:23:31.917083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:21.316 [2024-07-15 17:23:31.917099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.316 [2024-07-15 17:23:31.917288] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.316 [2024-07-15 17:23:31.917317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.316 [2024-07-15 17:23:31.917331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.316 [2024-07-15 17:23:31.917391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.316 
[2024-07-15 17:23:31.917429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.316 [2024-07-15 17:23:31.917453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.316 [2024-07-15 17:23:31.917466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:21.316 [2024-07-15 17:23:31.917477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.316 [2024-07-15 17:23:31.917494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.316 [2024-07-15 17:23:31.917505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:21.316 [2024-07-15 17:23:31.917519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.316 [2024-07-15 17:23:31.917543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917554] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.316 [2024-07-15 17:23:31.917578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.316 [2024-07-15 17:23:31.917621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.316 [2024-07-15 17:23:31.917658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.316 [2024-07-15 17:23:31.917701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:21.316 [2024-07-15 17:23:31.917711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.316 [2024-07-15 17:23:31.917746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.317 [2024-07-15 17:23:31.917758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:21.317 [2024-07-15 17:23:31.917772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.317 [2024-07-15 17:23:31.917783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.317 [2024-07-15 17:23:31.917796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:21.317 [2024-07-15 17:23:31.917807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.317 [2024-07-15 17:23:31.917821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.317 [2024-07-15 17:23:31.917833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:21.317 [2024-07-15 17:23:31.917845] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:21.317 [2024-07-15 17:23:31.917857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.317 [2024-07-15 17:23:31.917870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:21.317 [2024-07-15 17:23:31.917881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.317 [2024-07-15 17:23:31.917894] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.317 [2024-07-15 17:23:31.917906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.317 [2024-07-15 17:23:31.917927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.317 [2024-07-15 17:23:31.917939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.317 [2024-07-15 17:23:31.917953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:21.317 [2024-07-15 17:23:31.917965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.317 [2024-07-15 17:23:31.917981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.317 [2024-07-15 17:23:31.917993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.317 [2024-07-15 17:23:31.918006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.317 [2024-07-15 17:23:31.918018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.317 [2024-07-15 17:23:31.918036] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.317 [2024-07-15 17:23:31.918053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:21.317 [2024-07-15 17:23:31.918088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:21.317 [2024-07-15 17:23:31.918102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:21.317 [2024-07-15 17:23:31.918116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:21.317 [2024-07-15 17:23:31.918130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:21.317 [2024-07-15 17:23:31.918143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:21.317 [2024-07-15 17:23:31.918159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:21.317 [2024-07-15 17:23:31.918172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:21.317 [2024-07-15 17:23:31.918186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:21.317 [2024-07-15 17:23:31.918199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:21.317 [2024-07-15 17:23:31.918267] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.317 [2024-07-15 17:23:31.918280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.317 [2024-07-15 17:23:31.918308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.317 [2024-07-15 17:23:31.918323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.317 [2024-07-15 17:23:31.918335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.317 [2024-07-15 17:23:31.918353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.317 [2024-07-15 17:23:31.918388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.317 [2024-07-15 17:23:31.918425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:17:21.317 [2024-07-15 17:23:31.918438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.317 [2024-07-15 17:23:31.918557] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
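Before the NV cache scrub that follows, the layout dump above can be cross-checked with two lines of shell arithmetic: 20971520 L2P entries at 4 bytes each give exactly the 80.00 MiB l2p region reported, and the same 20971520 logical 4-KiB blocks are the capacity later exposed as ftl0 (num_blocks 20971520, i.e. 80 GiB). The --l2p_dram_limit 60 passed at create time is likewise consistent with the later "l2p maximum resident size is: 59 (of 60) MiB" record.
  echo $(( 20971520 * 4 / 1024 / 1024 ))            # l2p region size in MiB -> 80
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # addressable capacity in GiB -> 80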
00:17:21.317 [2024-07-15 17:23:31.918576] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:24.594 [2024-07-15 17:23:34.861268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.861382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:24.594 [2024-07-15 17:23:34.861413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2942.702 ms 00:17:24.594 [2024-07-15 17:23:34.861427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.877121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.877190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.594 [2024-07-15 17:23:34.877226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.532 ms 00:17:24.594 [2024-07-15 17:23:34.877240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.877443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.877465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:24.594 [2024-07-15 17:23:34.877482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:24.594 [2024-07-15 17:23:34.877495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.901118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.901189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.594 [2024-07-15 17:23:34.901215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.517 ms 00:17:24.594 [2024-07-15 17:23:34.901259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.901341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.901403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.594 [2024-07-15 17:23:34.901432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:24.594 [2024-07-15 17:23:34.901445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.902110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.902147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.594 [2024-07-15 17:23:34.902170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:24.594 [2024-07-15 17:23:34.902200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.902417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.902449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.594 [2024-07-15 17:23:34.902470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:24.594 [2024-07-15 17:23:34.902482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.912324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.912409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.594 [2024-07-15 
17:23:34.912436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.786 ms 00:17:24.594 [2024-07-15 17:23:34.912450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.923017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:24.594 [2024-07-15 17:23:34.945221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.945345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:24.594 [2024-07-15 17:23:34.945386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.564 ms 00:17:24.594 [2024-07-15 17:23:34.945411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.997303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.997413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:24.594 [2024-07-15 17:23:34.997440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.808 ms 00:17:24.594 [2024-07-15 17:23:34.997460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:34.997739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:34.997792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:24.594 [2024-07-15 17:23:34.997807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:24.594 [2024-07-15 17:23:34.997822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.001604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.001656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:24.594 [2024-07-15 17:23:35.001676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:17:24.594 [2024-07-15 17:23:35.001691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.004932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.004982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:24.594 [2024-07-15 17:23:35.005002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:17:24.594 [2024-07-15 17:23:35.005017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.005537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.005579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:24.594 [2024-07-15 17:23:35.005611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:17:24.594 [2024-07-15 17:23:35.005630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.038225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.038315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:24.594 [2024-07-15 17:23:35.038338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.539 ms 00:17:24.594 [2024-07-15 17:23:35.038355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.043532] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.043589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:24.594 [2024-07-15 17:23:35.043608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.091 ms 00:17:24.594 [2024-07-15 17:23:35.043636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.047252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.047303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:24.594 [2024-07-15 17:23:35.047320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:17:24.594 [2024-07-15 17:23:35.047334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.051461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.051512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:24.594 [2024-07-15 17:23:35.051532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.045 ms 00:17:24.594 [2024-07-15 17:23:35.051550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.594 [2024-07-15 17:23:35.051632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.594 [2024-07-15 17:23:35.051660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:24.595 [2024-07-15 17:23:35.051674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:24.595 [2024-07-15 17:23:35.051689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.595 [2024-07-15 17:23:35.051822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.595 [2024-07-15 17:23:35.051848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:24.595 [2024-07-15 17:23:35.051862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:24.595 [2024-07-15 17:23:35.051876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.595 [2024-07-15 17:23:35.053432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3154.636 ms, result 0 00:17:24.595 { 00:17:24.595 "name": "ftl0", 00:17:24.595 "uuid": "2654eda0-e853-4468-be6d-0dddadf2ef59" 00:17:24.595 } 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:24.595 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:24.854 [ 00:17:24.854 { 00:17:24.854 "name": "ftl0", 00:17:24.854 "aliases": [ 00:17:24.854 "2654eda0-e853-4468-be6d-0dddadf2ef59" 00:17:24.854 ], 00:17:24.854 "product_name": "FTL disk", 00:17:24.854 
"block_size": 4096, 00:17:24.854 "num_blocks": 20971520, 00:17:24.854 "uuid": "2654eda0-e853-4468-be6d-0dddadf2ef59", 00:17:24.854 "assigned_rate_limits": { 00:17:24.854 "rw_ios_per_sec": 0, 00:17:24.854 "rw_mbytes_per_sec": 0, 00:17:24.854 "r_mbytes_per_sec": 0, 00:17:24.854 "w_mbytes_per_sec": 0 00:17:24.854 }, 00:17:24.854 "claimed": false, 00:17:24.854 "zoned": false, 00:17:24.854 "supported_io_types": { 00:17:24.854 "read": true, 00:17:24.854 "write": true, 00:17:24.854 "unmap": true, 00:17:24.854 "flush": true, 00:17:24.854 "reset": false, 00:17:24.854 "nvme_admin": false, 00:17:24.854 "nvme_io": false, 00:17:24.854 "nvme_io_md": false, 00:17:24.854 "write_zeroes": true, 00:17:24.854 "zcopy": false, 00:17:24.854 "get_zone_info": false, 00:17:24.854 "zone_management": false, 00:17:24.854 "zone_append": false, 00:17:24.854 "compare": false, 00:17:24.854 "compare_and_write": false, 00:17:24.854 "abort": false, 00:17:24.854 "seek_hole": false, 00:17:24.854 "seek_data": false, 00:17:24.854 "copy": false, 00:17:24.854 "nvme_iov_md": false 00:17:24.854 }, 00:17:24.854 "driver_specific": { 00:17:24.854 "ftl": { 00:17:24.854 "base_bdev": "64bd0af2-cdef-4fca-bf0c-f9ecee1a8df5", 00:17:24.854 "cache": "nvc0n1p0" 00:17:24.854 } 00:17:24.854 } 00:17:24.854 } 00:17:24.854 ] 00:17:24.854 17:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:17:24.854 17:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:24.854 17:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:25.112 17:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:25.112 17:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:25.371 [2024-07-15 17:23:36.090478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.090553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:25.371 [2024-07-15 17:23:36.090581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.371 [2024-07-15 17:23:36.090596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.090665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.371 [2024-07-15 17:23:36.091602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.091653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:25.371 [2024-07-15 17:23:36.091670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:17:25.371 [2024-07-15 17:23:36.091685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.092346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.092384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:25.371 [2024-07-15 17:23:36.092404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:17:25.371 [2024-07-15 17:23:36.092418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.095621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.095677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:25.371 [2024-07-15 
17:23:36.095707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:17:25.371 [2024-07-15 17:23:36.095726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.102533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.102582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:25.371 [2024-07-15 17:23:36.102600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:17:25.371 [2024-07-15 17:23:36.102615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.104124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.104177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:25.371 [2024-07-15 17:23:36.104212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:17:25.371 [2024-07-15 17:23:36.104228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.108994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.109052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:25.371 [2024-07-15 17:23:36.109070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:17:25.371 [2024-07-15 17:23:36.109084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.371 [2024-07-15 17:23:36.109303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.371 [2024-07-15 17:23:36.109338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:25.372 [2024-07-15 17:23:36.109353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:25.372 [2024-07-15 17:23:36.109382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.372 [2024-07-15 17:23:36.111243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.372 [2024-07-15 17:23:36.111290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:25.372 [2024-07-15 17:23:36.111307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:17:25.372 [2024-07-15 17:23:36.111321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.372 [2024-07-15 17:23:36.112885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.372 [2024-07-15 17:23:36.112943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:25.372 [2024-07-15 17:23:36.112960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:17:25.372 [2024-07-15 17:23:36.112974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.372 [2024-07-15 17:23:36.114229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.372 [2024-07-15 17:23:36.114275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:25.372 [2024-07-15 17:23:36.114290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:17:25.372 [2024-07-15 17:23:36.114304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.372 [2024-07-15 17:23:36.115489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.372 [2024-07-15 17:23:36.115532] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:25.372 [2024-07-15 17:23:36.115548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:17:25.372 [2024-07-15 17:23:36.115561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.372 [2024-07-15 17:23:36.115617] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:25.372 [2024-07-15 17:23:36.115663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 
17:23:36.115973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.115986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:25.372 [2024-07-15 17:23:36.116346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:25.372 [2024-07-15 17:23:36.116795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.116992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:25.373 [2024-07-15 17:23:36.117193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:25.373 [2024-07-15 17:23:36.117209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2654eda0-e853-4468-be6d-0dddadf2ef59 00:17:25.373 [2024-07-15 17:23:36.117230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:25.373 [2024-07-15 17:23:36.117242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:25.373 [2024-07-15 17:23:36.117257] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:25.373 [2024-07-15 17:23:36.117269] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:25.373 [2024-07-15 17:23:36.117283] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:25.373 [2024-07-15 17:23:36.117304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:25.373 [2024-07-15 17:23:36.117318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:25.373 [2024-07-15 17:23:36.117329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:25.373 [2024-07-15 17:23:36.117342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:25.373 [2024-07-15 17:23:36.117354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.373 [2024-07-15 17:23:36.117720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:25.373 [2024-07-15 17:23:36.117769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:17:25.373 [2024-07-15 17:23:36.117810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.120386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.373 [2024-07-15 17:23:36.120542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:25.373 [2024-07-15 17:23:36.120662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:17:25.373 [2024-07-15 17:23:36.120718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.120938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.373 [2024-07-15 17:23:36.120998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:25.373 [2024-07-15 17:23:36.121100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:25.373 [2024-07-15 17:23:36.121154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.129772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.129962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.373 [2024-07-15 17:23:36.130114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.130175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 
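An aside on the dump above: the per-band validity table and the statistics block are printed by ftl_debug.c as part of the 'FTL shutdown' management process that bdev_ftl_unload kicks off. Similar information can usually be pulled from a live device over RPC; the sketch below assumes the running build exposes the FTL-specific calls bdev_ftl_get_stats and bdev_ftl_get_properties (treat those two names as assumptions, only bdev_get_bdevs is shown elsewhere in this log).

  # Hedged sketch: inspect a live FTL bdev instead of waiting for the shutdown dump.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs -b ftl0                   # generic bdev view, same RPC used by waitforbdev above
  $rpc bdev_ftl_get_stats -b ftl0 || true       # assumed RPC: per-device write/read counters
  $rpc bdev_ftl_get_properties -b ftl0 || true  # assumed RPC: FTL layout and limit properties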
[2024-07-15 17:23:36.130300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.130387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.373 [2024-07-15 17:23:36.130514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.130573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.130788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.130867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.373 [2024-07-15 17:23:36.131010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.131071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.131168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.131231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.373 [2024-07-15 17:23:36.131277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.131421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.145844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.146138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.373 [2024-07-15 17:23:36.146168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.146185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.157051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.157127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.373 [2024-07-15 17:23:36.157164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.157186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.157335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.157386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.373 [2024-07-15 17:23:36.157404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.157420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.157539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.157563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.373 [2024-07-15 17:23:36.157595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.157622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.157770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.157795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.373 [2024-07-15 17:23:36.157809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.157824] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.157899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.157923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:25.373 [2024-07-15 17:23:36.157936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.157950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.158025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.158047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.373 [2024-07-15 17:23:36.158060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.158074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.158151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.373 [2024-07-15 17:23:36.158174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.373 [2024-07-15 17:23:36.158187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.373 [2024-07-15 17:23:36.158221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.373 [2024-07-15 17:23:36.158472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.983 ms, result 0 00:17:25.373 true 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 91396 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 91396 ']' 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 91396 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91396 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91396' 00:17:25.373 killing process with pid 91396 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 91396 00:17:25.373 17:23:36 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 91396 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:28.653 17:23:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:28.911 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:28.911 fio-3.35 00:17:28.911 Starting 1 thread 00:17:34.205 00:17:34.205 test: (groupid=0, jobs=1): err= 0: pid=91577: Mon Jul 15 17:23:44 2024 00:17:34.205 read: IOPS=933, BW=62.0MiB/s (65.0MB/s)(255MiB/4107msec) 00:17:34.205 slat (nsec): min=5752, max=34532, avg=7527.13, stdev=2887.69 00:17:34.205 clat (usec): min=335, max=907, avg=475.14, stdev=55.31 00:17:34.205 lat (usec): min=342, max=914, avg=482.67, stdev=56.03 00:17:34.205 clat percentiles (usec): 00:17:34.205 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 412], 20.00th=[ 445], 00:17:34.205 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 469], 00:17:34.205 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 570], 00:17:34.205 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 791], 99.95th=[ 865], 00:17:34.205 | 99.99th=[ 906] 00:17:34.205 write: IOPS=939, BW=62.4MiB/s (65.4MB/s)(256MiB/4103msec); 0 zone resets 00:17:34.205 slat (usec): min=19, max=112, avg=24.69, stdev= 5.71 00:17:34.205 clat (usec): min=372, max=1030, avg=546.36, stdev=66.26 00:17:34.205 lat (usec): min=395, max=1093, avg=571.05, stdev=66.93 00:17:34.205 clat percentiles (usec): 00:17:34.205 | 1.00th=[ 424], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 486], 00:17:34.205 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:17:34.205 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:17:34.205 | 99.00th=[ 783], 99.50th=[ 865], 99.90th=[ 979], 99.95th=[ 1012], 00:17:34.205 | 99.99th=[ 1029] 00:17:34.205 bw ( KiB/s): min=63240, max=65144, per=100.00%, avg=63920.00, stdev=681.94, samples=8 00:17:34.205 iops : min= 930, max= 958, avg=940.00, stdev=10.03, samples=8 00:17:34.205 lat (usec) : 500=49.79%, 750=49.34%, 1000=0.85% 00:17:34.205 lat (msec) : 
2=0.03% 00:17:34.205 cpu : usr=99.17%, sys=0.17%, ctx=12, majf=0, minf=1181 00:17:34.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:34.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.205 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:34.205 00:17:34.205 Run status group 0 (all jobs): 00:17:34.205 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=255MiB (267MB), run=4107-4107msec 00:17:34.205 WRITE: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=256MiB (269MB), run=4103-4103msec 00:17:34.463 ----------------------------------------------------- 00:17:34.463 Suppressions used: 00:17:34.463 count bytes template 00:17:34.463 1 5 /usr/src/fio/parse.c 00:17:34.463 1 8 libtcmalloc_minimal.so 00:17:34.463 1 904 libcrypto.so 00:17:34.463 ----------------------------------------------------- 00:17:34.463 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:34.463 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:34.464 17:23:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:34.722 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:34.722 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:34.722 fio-3.35 00:17:34.722 Starting 2 threads 00:18:06.778 00:18:06.778 first_half: (groupid=0, jobs=1): err= 0: pid=91663: Mon Jul 15 17:24:14 2024 00:18:06.778 read: IOPS=2329, BW=9316KiB/s (9540kB/s)(256MiB/28109msec) 00:18:06.778 slat (nsec): min=4655, max=46856, avg=8137.18, stdev=2413.79 00:18:06.778 clat (usec): min=784, max=408483, avg=46274.05, stdev=29721.47 00:18:06.778 lat (usec): min=789, max=408492, avg=46282.19, stdev=29721.69 00:18:06.778 clat percentiles (msec): 00:18:06.778 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:18:06.778 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 41], 00:18:06.778 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 52], 95.00th=[ 89], 00:18:06.778 | 99.00th=[ 197], 99.50th=[ 215], 99.90th=[ 284], 99.95th=[ 338], 00:18:06.778 | 99.99th=[ 393] 00:18:06.778 write: IOPS=2335, BW=9341KiB/s (9565kB/s)(256MiB/28063msec); 0 zone resets 00:18:06.778 slat (usec): min=5, max=469, avg= 9.27, stdev= 5.25 00:18:06.778 clat (usec): min=473, max=52509, avg=8634.39, stdev=8947.72 00:18:06.778 lat (usec): min=491, max=52521, avg=8643.66, stdev=8947.95 00:18:06.778 clat percentiles (usec): 00:18:06.778 | 1.00th=[ 1106], 5.00th=[ 1500], 10.00th=[ 1762], 20.00th=[ 3032], 00:18:06.778 | 30.00th=[ 4080], 40.00th=[ 5211], 50.00th=[ 6063], 60.00th=[ 7177], 00:18:06.778 | 70.00th=[ 8356], 80.00th=[11338], 90.00th=[16450], 95.00th=[31589], 00:18:06.778 | 99.00th=[45876], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:18:06.778 | 99.99th=[52167] 00:18:06.778 bw ( KiB/s): min= 7416, max=48304, per=100.00%, avg=24801.14, stdev=11488.35, samples=21 00:18:06.778 iops : min= 1854, max=12076, avg=6200.29, stdev=2872.09, samples=21 00:18:06.778 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.22% 00:18:06.778 lat (msec) : 2=6.73%, 4=7.48%, 10=24.39%, 20=9.07%, 50=45.99% 00:18:06.778 lat (msec) : 100=3.77%, 250=2.20%, 500=0.09% 00:18:06.778 cpu : usr=99.24%, sys=0.12%, ctx=53, majf=0, minf=5545 00:18:06.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:06.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.778 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.778 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.778 second_half: (groupid=0, jobs=1): err= 0: pid=91664: Mon Jul 15 17:24:14 2024 00:18:06.778 read: IOPS=2349, BW=9396KiB/s (9622kB/s)(256MiB/27879msec) 00:18:06.778 slat (nsec): min=4625, max=93149, avg=8368.98, stdev=2541.49 00:18:06.778 clat (msec): min=13, max=234, avg=46.73, stdev=25.63 00:18:06.778 lat (msec): min=13, max=234, avg=46.73, stdev=25.63 00:18:06.778 clat percentiles (msec): 00:18:06.778 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 38], 00:18:06.778 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 42], 00:18:06.778 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 84], 00:18:06.778 | 99.00th=[ 188], 
99.50th=[ 207], 99.90th=[ 224], 99.95th=[ 224], 00:18:06.778 | 99.99th=[ 232] 00:18:06.778 write: IOPS=2363, BW=9453KiB/s (9680kB/s)(256MiB/27730msec); 0 zone resets 00:18:06.778 slat (usec): min=5, max=385, avg= 9.15, stdev= 4.75 00:18:06.778 clat (usec): min=484, max=47027, avg=7733.50, stdev=5627.88 00:18:06.778 lat (usec): min=490, max=47035, avg=7742.66, stdev=5628.18 00:18:06.778 clat percentiles (usec): 00:18:06.778 | 1.00th=[ 1270], 5.00th=[ 2040], 10.00th=[ 3097], 20.00th=[ 3982], 00:18:06.778 | 30.00th=[ 4883], 40.00th=[ 5669], 50.00th=[ 6259], 60.00th=[ 7111], 00:18:06.778 | 70.00th=[ 7701], 80.00th=[ 9372], 90.00th=[15401], 95.00th=[17695], 00:18:06.778 | 99.00th=[30278], 99.50th=[40109], 99.90th=[44303], 99.95th=[45351], 00:18:06.778 | 99.99th=[46400] 00:18:06.778 bw ( KiB/s): min= 192, max=39024, per=100.00%, avg=20824.24, stdev=12579.58, samples=25 00:18:06.778 iops : min= 48, max= 9756, avg=5206.04, stdev=3144.87, samples=25 00:18:06.778 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.14% 00:18:06.778 lat (msec) : 2=2.22%, 4=7.67%, 10=30.50%, 20=7.92%, 50=44.75% 00:18:06.778 lat (msec) : 100=4.68%, 250=2.07% 00:18:06.778 cpu : usr=99.11%, sys=0.15%, ctx=45, majf=0, minf=5595 00:18:06.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:06.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.778 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.778 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.778 00:18:06.778 Run status group 0 (all jobs): 00:18:06.778 READ: bw=18.2MiB/s (19.1MB/s), 9316KiB/s-9396KiB/s (9540kB/s-9622kB/s), io=512MiB (536MB), run=27879-28109msec 00:18:06.778 WRITE: bw=18.2MiB/s (19.1MB/s), 9341KiB/s-9453KiB/s (9565kB/s-9680kB/s), io=512MiB (537MB), run=27730-28063msec 00:18:06.778 ----------------------------------------------------- 00:18:06.778 Suppressions used: 00:18:06.778 count bytes template 00:18:06.778 2 10 /usr/src/fio/parse.c 00:18:06.778 3 288 /usr/src/fio/iolog.c 00:18:06.778 1 8 libtcmalloc_minimal.so 00:18:06.778 1 904 libcrypto.so 00:18:06.778 ----------------------------------------------------- 00:18:06.778 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.778 17:24:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:06.779 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:06.779 fio-3.35 00:18:06.779 Starting 1 thread 00:18:24.875 00:18:24.875 test: (groupid=0, jobs=1): err= 0: pid=92005: Mon Jul 15 17:24:33 2024 00:18:24.875 read: IOPS=6309, BW=24.6MiB/s (25.8MB/s)(255MiB/10334msec) 00:18:24.875 slat (nsec): min=4767, max=64290, avg=7326.54, stdev=2579.42 00:18:24.875 clat (usec): min=914, max=38328, avg=20275.57, stdev=1612.93 00:18:24.875 lat (usec): min=919, max=38333, avg=20282.90, stdev=1613.40 00:18:24.875 clat percentiles (usec): 00:18:24.875 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:18:24.875 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:18:24.875 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21890], 95.00th=[23462], 00:18:24.875 | 99.00th=[26346], 99.50th=[27132], 99.90th=[28967], 99.95th=[33817], 00:18:24.875 | 99.99th=[37487] 00:18:24.875 write: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(256MiB/6363msec); 0 zone resets 00:18:24.875 slat (usec): min=6, max=339, avg=11.49, stdev= 5.75 00:18:24.875 clat (usec): min=712, max=66446, avg=12360.30, stdev=14433.01 00:18:24.875 lat (usec): min=720, max=66456, avg=12371.80, stdev=14433.02 00:18:24.875 clat percentiles (usec): 00:18:24.875 | 1.00th=[ 1037], 5.00th=[ 1237], 10.00th=[ 1369], 20.00th=[ 1565], 00:18:24.875 | 30.00th=[ 1778], 40.00th=[ 2343], 50.00th=[ 8717], 60.00th=[10159], 00:18:24.875 | 70.00th=[12125], 80.00th=[15270], 90.00th=[42730], 95.00th=[44827], 00:18:24.875 | 99.00th=[47449], 99.50th=[48497], 99.90th=[55837], 99.95th=[61080], 00:18:24.875 | 99.99th=[64750] 00:18:24.875 bw ( KiB/s): min=27320, max=55392, per=97.89%, avg=40329.85, stdev=7054.54, samples=13 00:18:24.875 iops : min= 6830, max=13848, avg=10082.46, stdev=1763.64, samples=13 00:18:24.875 lat (usec) : 750=0.01%, 1000=0.32% 00:18:24.875 lat (msec) : 2=17.93%, 4=2.44%, 10=8.68%, 20=41.69%, 50=28.81% 00:18:24.875 lat (msec) : 100=0.12% 00:18:24.875 cpu : usr=98.98%, sys=0.20%, ctx=35, majf=0, minf=5577 00:18:24.875 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:24.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.875 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.875 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.875 00:18:24.875 Run status group 0 (all jobs): 00:18:24.875 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=255MiB (267MB), run=10334-10334msec 00:18:24.875 WRITE: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=256MiB (268MB), run=6363-6363msec 00:18:24.875 ----------------------------------------------------- 00:18:24.875 Suppressions used: 00:18:24.875 count bytes template 00:18:24.875 1 5 /usr/src/fio/parse.c 00:18:24.875 2 192 /usr/src/fio/iolog.c 00:18:24.875 1 8 libtcmalloc_minimal.so 00:18:24.875 1 904 libcrypto.so 00:18:24.875 ----------------------------------------------------- 00:18:24.875 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:24.875 Remove shared memory files 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid75629 /dev/shm/spdk_tgt_trace.pid90365 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:24.875 ************************************ 00:18:24.875 END TEST ftl_fio_basic 00:18:24.875 ************************************ 00:18:24.875 00:18:24.875 real 1m7.261s 00:18:24.875 user 2m33.384s 00:18:24.875 sys 0m3.919s 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:24.875 17:24:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.875 17:24:34 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:24.875 17:24:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:24.875 17:24:34 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:24.875 17:24:34 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.875 17:24:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:24.875 ************************************ 00:18:24.875 START TEST ftl_bdevperf 00:18:24.875 ************************************ 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:24.875 * Looking for test storage... 
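Before moving on to the bdevperf test: the three fio jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) were all launched through the fio_bdev helper traced earlier, which finds the ASAN runtime with ldd/grep/awk and LD_PRELOADs it together with the SPDK bdev ioengine before invoking the system fio. A minimal hand-run equivalent is sketched below; the spdk_json_conf wiring and the job-file contents are assumptions reconstructed from the job headers printed in this log.

  # Hedged sketch of what fio_bdev effectively executes for the first job file.
  spdk=/home/vagrant/spdk_repo/spdk
  asan_lib=$(ldd "$spdk/build/fio/spdk_bdev" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio "$spdk/test/ftl/config/fio/randw-verify.fio"

  # The job file is roughly the following (values copied from the job header above;
  # the spdk_json_conf path and the filename binding are assumptions):
  #   [test]
  #   ioengine=spdk_bdev
  #   spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  #   filename=ftl0
  #   rw=randwrite
  #   bs=68k
  #   iodepth=1
  #   (verification enabled; exact checksum type not visible in this log)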
00:18:24.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:24.875 17:24:34 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=92260 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 92260 00:18:24.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 92260 ']' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.875 17:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.875 [2024-07-15 17:24:34.791272] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:18:24.875 [2024-07-15 17:24:34.791500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92260 ] 00:18:24.875 [2024-07-15 17:24:34.943725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
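For orientation, the launch sequence traced above (ftl/bdevperf.sh@18-22) amounts to the sketch below; it is reconstructed from the xtrace output, not the verbatim script. bdevperf is started idle (-z) against the FTL target name ftl0, and waitforlisten (the autotest_common.sh helper seen above with rpc_addr=/var/tmp/spdk.sock and max_retries=100) blocks until the app answers on its RPC socket:

    # reconstructed sketch; killprocess/waitforlisten are the autotest_common.sh helpers from the trace
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -z -T ftl0 &                                      # -z: wait for RPC before running any job
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevperf_pid"                                 # polls /var/tmp/spdk.sock until it is up
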
00:18:24.875 [2024-07-15 17:24:34.962042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.875 [2024-07-15 17:24:35.060293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:24.875 17:24:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:25.445 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:25.703 { 00:18:25.703 "name": "nvme0n1", 00:18:25.703 "aliases": [ 00:18:25.703 "cd4a85f4-ed33-48f3-ac2c-efef1f29b707" 00:18:25.703 ], 00:18:25.703 "product_name": "NVMe disk", 00:18:25.703 "block_size": 4096, 00:18:25.703 "num_blocks": 1310720, 00:18:25.703 "uuid": "cd4a85f4-ed33-48f3-ac2c-efef1f29b707", 00:18:25.703 "assigned_rate_limits": { 00:18:25.703 "rw_ios_per_sec": 0, 00:18:25.703 "rw_mbytes_per_sec": 0, 00:18:25.703 "r_mbytes_per_sec": 0, 00:18:25.703 "w_mbytes_per_sec": 0 00:18:25.703 }, 00:18:25.703 "claimed": true, 00:18:25.703 "claim_type": "read_many_write_one", 00:18:25.703 "zoned": false, 00:18:25.703 "supported_io_types": { 00:18:25.703 "read": true, 00:18:25.703 "write": true, 00:18:25.703 "unmap": true, 00:18:25.703 "flush": true, 00:18:25.703 "reset": true, 00:18:25.703 "nvme_admin": true, 00:18:25.703 "nvme_io": true, 00:18:25.703 "nvme_io_md": false, 00:18:25.703 "write_zeroes": true, 00:18:25.703 "zcopy": false, 00:18:25.703 "get_zone_info": false, 00:18:25.703 "zone_management": false, 00:18:25.703 "zone_append": false, 00:18:25.703 "compare": true, 00:18:25.703 "compare_and_write": false, 00:18:25.703 "abort": true, 00:18:25.703 "seek_hole": false, 00:18:25.703 "seek_data": false, 00:18:25.703 "copy": true, 00:18:25.703 "nvme_iov_md": false 00:18:25.703 }, 00:18:25.703 "driver_specific": { 00:18:25.703 "nvme": [ 00:18:25.703 { 00:18:25.703 "pci_address": "0000:00:11.0", 00:18:25.703 "trid": { 00:18:25.703 "trtype": "PCIe", 00:18:25.703 "traddr": "0000:00:11.0" 00:18:25.703 }, 00:18:25.703 "ctrlr_data": { 00:18:25.703 "cntlid": 0, 00:18:25.703 "vendor_id": "0x1b36", 00:18:25.703 "model_number": "QEMU NVMe Ctrl", 00:18:25.703 
"serial_number": "12341", 00:18:25.703 "firmware_revision": "8.0.0", 00:18:25.703 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:25.703 "oacs": { 00:18:25.703 "security": 0, 00:18:25.703 "format": 1, 00:18:25.703 "firmware": 0, 00:18:25.703 "ns_manage": 1 00:18:25.703 }, 00:18:25.703 "multi_ctrlr": false, 00:18:25.703 "ana_reporting": false 00:18:25.703 }, 00:18:25.703 "vs": { 00:18:25.703 "nvme_version": "1.4" 00:18:25.703 }, 00:18:25.703 "ns_data": { 00:18:25.703 "id": 1, 00:18:25.703 "can_share": false 00:18:25.703 } 00:18:25.703 } 00:18:25.703 ], 00:18:25.703 "mp_policy": "active_passive" 00:18:25.703 } 00:18:25.703 } 00:18:25.703 ]' 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:25.703 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:25.960 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=295424f7-b374-4b85-8dfb-f9c1dfa409f9 00:18:25.960 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:25.960 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 295424f7-b374-4b85-8dfb-f9c1dfa409f9 00:18:26.218 17:24:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:26.476 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e 00:18:26.476 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1381 -- # local nb 00:18:26.733 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2379228-b3d7-474a-9eb6-c34849b21760 00:18:26.990 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:26.990 { 00:18:26.990 "name": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:26.990 "aliases": [ 00:18:26.990 "lvs/nvme0n1p0" 00:18:26.990 ], 00:18:26.990 "product_name": "Logical Volume", 00:18:26.990 "block_size": 4096, 00:18:26.990 "num_blocks": 26476544, 00:18:26.990 "uuid": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:26.990 "assigned_rate_limits": { 00:18:26.990 "rw_ios_per_sec": 0, 00:18:26.990 "rw_mbytes_per_sec": 0, 00:18:26.990 "r_mbytes_per_sec": 0, 00:18:26.990 "w_mbytes_per_sec": 0 00:18:26.990 }, 00:18:26.990 "claimed": false, 00:18:26.990 "zoned": false, 00:18:26.990 "supported_io_types": { 00:18:26.990 "read": true, 00:18:26.990 "write": true, 00:18:26.990 "unmap": true, 00:18:26.990 "flush": false, 00:18:26.990 "reset": true, 00:18:26.990 "nvme_admin": false, 00:18:26.990 "nvme_io": false, 00:18:26.990 "nvme_io_md": false, 00:18:26.990 "write_zeroes": true, 00:18:26.990 "zcopy": false, 00:18:26.990 "get_zone_info": false, 00:18:26.990 "zone_management": false, 00:18:26.990 "zone_append": false, 00:18:26.990 "compare": false, 00:18:26.990 "compare_and_write": false, 00:18:26.990 "abort": false, 00:18:26.990 "seek_hole": true, 00:18:26.990 "seek_data": true, 00:18:26.990 "copy": false, 00:18:26.990 "nvme_iov_md": false 00:18:26.990 }, 00:18:26.990 "driver_specific": { 00:18:26.990 "lvol": { 00:18:26.990 "lvol_store_uuid": "4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e", 00:18:26.990 "base_bdev": "nvme0n1", 00:18:26.990 "thin_provision": true, 00:18:26.990 "num_allocated_clusters": 0, 00:18:26.990 "snapshot": false, 00:18:26.990 "clone": false, 00:18:26.990 "esnap_clone": false 00:18:26.990 } 00:18:26.990 } 00:18:26.990 } 00:18:26.990 ]' 00:18:26.990 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:26.990 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:26.990 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:27.246 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:27.246 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:27.246 17:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:27.247 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:27.247 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:27.247 17:24:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c2379228-b3d7-474a-9eb6-c34849b21760 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c2379228-b3d7-474a-9eb6-c34849b21760 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:27.504 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 
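The repeated get_bdev_size calls traced around this point (autotest_common.sh@1378-1388) reduce to a small rpc.py + jq helper; the sketch below is reconstructed from the trace and is not the verbatim implementation. For the lvol queried here it yields 4096 * 26476544 / 1024 / 1024 = 103424 MiB, and for the raw NVMe namespace earlier 4096 * 1310720 / 1024 / 1024 = 5120 MiB:

    # reconstructed sketch of get_bdev_size (prints size in MiB)
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")     # e.g. 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # e.g. 26476544
        echo $((bs * nb / 1024 / 1024))                 # e.g. 103424
    }
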
00:18:27.504 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2379228-b3d7-474a-9eb6-c34849b21760 00:18:27.761 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:27.761 { 00:18:27.761 "name": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:27.761 "aliases": [ 00:18:27.761 "lvs/nvme0n1p0" 00:18:27.761 ], 00:18:27.761 "product_name": "Logical Volume", 00:18:27.761 "block_size": 4096, 00:18:27.761 "num_blocks": 26476544, 00:18:27.761 "uuid": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:27.761 "assigned_rate_limits": { 00:18:27.761 "rw_ios_per_sec": 0, 00:18:27.761 "rw_mbytes_per_sec": 0, 00:18:27.761 "r_mbytes_per_sec": 0, 00:18:27.761 "w_mbytes_per_sec": 0 00:18:27.761 }, 00:18:27.761 "claimed": false, 00:18:27.761 "zoned": false, 00:18:27.761 "supported_io_types": { 00:18:27.761 "read": true, 00:18:27.761 "write": true, 00:18:27.761 "unmap": true, 00:18:27.761 "flush": false, 00:18:27.761 "reset": true, 00:18:27.761 "nvme_admin": false, 00:18:27.761 "nvme_io": false, 00:18:27.761 "nvme_io_md": false, 00:18:27.761 "write_zeroes": true, 00:18:27.761 "zcopy": false, 00:18:27.761 "get_zone_info": false, 00:18:27.761 "zone_management": false, 00:18:27.761 "zone_append": false, 00:18:27.761 "compare": false, 00:18:27.761 "compare_and_write": false, 00:18:27.761 "abort": false, 00:18:27.761 "seek_hole": true, 00:18:27.761 "seek_data": true, 00:18:27.761 "copy": false, 00:18:27.761 "nvme_iov_md": false 00:18:27.761 }, 00:18:27.761 "driver_specific": { 00:18:27.761 "lvol": { 00:18:27.761 "lvol_store_uuid": "4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e", 00:18:27.761 "base_bdev": "nvme0n1", 00:18:27.761 "thin_provision": true, 00:18:27.761 "num_allocated_clusters": 0, 00:18:27.761 "snapshot": false, 00:18:27.761 "clone": false, 00:18:27.761 "esnap_clone": false 00:18:27.761 } 00:18:27.761 } 00:18:27.761 } 00:18:27.761 ]' 00:18:27.761 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:27.761 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:27.762 17:24:38 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size c2379228-b3d7-474a-9eb6-c34849b21760 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c2379228-b3d7-474a-9eb6-c34849b21760 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:28.019 17:24:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2379228-b3d7-474a-9eb6-c34849b21760 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:28.277 { 00:18:28.277 "name": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:28.277 "aliases": [ 00:18:28.277 "lvs/nvme0n1p0" 00:18:28.277 ], 00:18:28.277 "product_name": "Logical Volume", 00:18:28.277 "block_size": 4096, 00:18:28.277 "num_blocks": 26476544, 00:18:28.277 "uuid": "c2379228-b3d7-474a-9eb6-c34849b21760", 00:18:28.277 "assigned_rate_limits": { 00:18:28.277 "rw_ios_per_sec": 0, 00:18:28.277 "rw_mbytes_per_sec": 0, 00:18:28.277 "r_mbytes_per_sec": 0, 00:18:28.277 "w_mbytes_per_sec": 0 00:18:28.277 }, 00:18:28.277 "claimed": false, 00:18:28.277 "zoned": false, 00:18:28.277 "supported_io_types": { 00:18:28.277 "read": true, 00:18:28.277 "write": true, 00:18:28.277 "unmap": true, 00:18:28.277 "flush": false, 00:18:28.277 "reset": true, 00:18:28.277 "nvme_admin": false, 00:18:28.277 "nvme_io": false, 00:18:28.277 "nvme_io_md": false, 00:18:28.277 "write_zeroes": true, 00:18:28.277 "zcopy": false, 00:18:28.277 "get_zone_info": false, 00:18:28.277 "zone_management": false, 00:18:28.277 "zone_append": false, 00:18:28.277 "compare": false, 00:18:28.277 "compare_and_write": false, 00:18:28.277 "abort": false, 00:18:28.277 "seek_hole": true, 00:18:28.277 "seek_data": true, 00:18:28.277 "copy": false, 00:18:28.277 "nvme_iov_md": false 00:18:28.277 }, 00:18:28.277 "driver_specific": { 00:18:28.277 "lvol": { 00:18:28.277 "lvol_store_uuid": "4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e", 00:18:28.277 "base_bdev": "nvme0n1", 00:18:28.277 "thin_provision": true, 00:18:28.277 "num_allocated_clusters": 0, 00:18:28.277 "snapshot": false, 00:18:28.277 "clone": false, 00:18:28.277 "esnap_clone": false 00:18:28.277 } 00:18:28.277 } 00:18:28.277 } 00:18:28.277 ]' 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:28.277 17:24:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c2379228-b3d7-474a-9eb6-c34849b21760 -c nvc0n1p0 --l2p_dram_limit 20 00:18:28.535 [2024-07-15 17:24:39.354854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.354921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.535 [2024-07-15 17:24:39.354948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:28.535 [2024-07-15 17:24:39.354963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.355056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.355077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.535 [2024-07-15 17:24:39.355097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:28.535 [2024-07-15 17:24:39.355110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.355150] mngt/ftl_mngt_bdev.c: 
196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.535 [2024-07-15 17:24:39.355541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.535 [2024-07-15 17:24:39.355582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.355597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.535 [2024-07-15 17:24:39.355612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:18:28.535 [2024-07-15 17:24:39.355633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.355797] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b0abc462-3639-4e2b-8bcf-40b334327218 00:18:28.535 [2024-07-15 17:24:39.357664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.357711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:28.535 [2024-07-15 17:24:39.357730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:28.535 [2024-07-15 17:24:39.357745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.367328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.367400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.535 [2024-07-15 17:24:39.367421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.531 ms 00:18:28.535 [2024-07-15 17:24:39.367456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.367610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.367634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.535 [2024-07-15 17:24:39.367649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:28.535 [2024-07-15 17:24:39.367678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.367777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.367815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.535 [2024-07-15 17:24:39.367830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:28.535 [2024-07-15 17:24:39.367845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.367879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.535 [2024-07-15 17:24:39.370159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.370196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.535 [2024-07-15 17:24:39.370217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:18:28.535 [2024-07-15 17:24:39.370230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.370277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.370304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.535 [2024-07-15 17:24:39.370332] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:28.535 [2024-07-15 17:24:39.370345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.370392] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:28.535 [2024-07-15 17:24:39.370588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.535 [2024-07-15 17:24:39.370626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.535 [2024-07-15 17:24:39.370644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:28.535 [2024-07-15 17:24:39.370664] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.535 [2024-07-15 17:24:39.370679] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.535 [2024-07-15 17:24:39.370699] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.535 [2024-07-15 17:24:39.370711] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.535 [2024-07-15 17:24:39.370726] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.535 [2024-07-15 17:24:39.370738] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.535 [2024-07-15 17:24:39.370755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.370768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.535 [2024-07-15 17:24:39.370784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:18:28.535 [2024-07-15 17:24:39.370800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.370902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.535 [2024-07-15 17:24:39.370918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.535 [2024-07-15 17:24:39.370933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:28.535 [2024-07-15 17:24:39.370948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.535 [2024-07-15 17:24:39.371066] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.535 [2024-07-15 17:24:39.371086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.535 [2024-07-15 17:24:39.371103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.535 [2024-07-15 17:24:39.371115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.535 [2024-07-15 17:24:39.371144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.535 [2024-07-15 17:24:39.371171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.535 [2024-07-15 17:24:39.371185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.535 [2024-07-15 
17:24:39.371211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.535 [2024-07-15 17:24:39.371223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.535 [2024-07-15 17:24:39.371240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.535 [2024-07-15 17:24:39.371263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.535 [2024-07-15 17:24:39.371278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.535 [2024-07-15 17:24:39.371289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.535 [2024-07-15 17:24:39.371315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.535 [2024-07-15 17:24:39.371329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.535 [2024-07-15 17:24:39.371356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.535 [2024-07-15 17:24:39.371677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.535 [2024-07-15 17:24:39.371719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.535 [2024-07-15 17:24:39.371853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.535 [2024-07-15 17:24:39.371905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.535 [2024-07-15 17:24:39.371948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.535 [2024-07-15 17:24:39.372060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.535 [2024-07-15 17:24:39.372120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.535 [2024-07-15 17:24:39.372162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.535 [2024-07-15 17:24:39.372282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.535 [2024-07-15 17:24:39.372334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.536 [2024-07-15 17:24:39.372417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.536 [2024-07-15 17:24:39.372524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.536 [2024-07-15 17:24:39.372635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.536 [2024-07-15 17:24:39.372687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.536 [2024-07-15 17:24:39.372729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.536 [2024-07-15 17:24:39.372903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.536 [2024-07-15 17:24:39.372959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:28.536 [2024-07-15 17:24:39.372999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.536 [2024-07-15 17:24:39.373072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.536 [2024-07-15 17:24:39.373197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:18:28.536 [2024-07-15 17:24:39.373218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.536 [2024-07-15 17:24:39.373231] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.536 [2024-07-15 17:24:39.373250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.536 [2024-07-15 17:24:39.373273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.536 [2024-07-15 17:24:39.373289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.536 [2024-07-15 17:24:39.373305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.536 [2024-07-15 17:24:39.373319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.536 [2024-07-15 17:24:39.373330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.536 [2024-07-15 17:24:39.373343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.536 [2024-07-15 17:24:39.373355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.536 [2024-07-15 17:24:39.373390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.536 [2024-07-15 17:24:39.373409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.536 [2024-07-15 17:24:39.373427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.536 [2024-07-15 17:24:39.373457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.536 [2024-07-15 17:24:39.373469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.536 [2024-07-15 17:24:39.373493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.536 [2024-07-15 17:24:39.373505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.536 [2024-07-15 17:24:39.373522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.536 [2024-07-15 17:24:39.373535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.536 [2024-07-15 17:24:39.373549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.536 [2024-07-15 17:24:39.373562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.536 [2024-07-15 17:24:39.373576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373603] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.536 [2024-07-15 17:24:39.373645] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.536 [2024-07-15 17:24:39.373661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.536 [2024-07-15 17:24:39.373705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.536 [2024-07-15 17:24:39.373718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.536 [2024-07-15 17:24:39.373733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.536 [2024-07-15 17:24:39.373748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.536 [2024-07-15 17:24:39.373768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.536 [2024-07-15 17:24:39.373781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:18:28.536 [2024-07-15 17:24:39.373796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.536 [2024-07-15 17:24:39.373878] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
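A quick way to read the layout dump above: the logical-to-physical (L2P) table for this device needs 20971520 entries of 4 bytes each, i.e. 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line, while the bdev_ftl_create call only allowed a 20 MiB DRAM budget (--l2p_dram_limit 20). The table therefore cannot be fully resident, which is why ftl_l2p_cache reports "l2p maximum resident size is: 19 (of 20) MiB" further down. The arithmetic, for reference:

    # L2P table size = entries * address size (figures taken from the layout dump above)
    echo $((20971520 * 4 / 1024 / 1024))   # 80 (MiB), vs. --l2p_dram_limit 20
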
00:18:28.536 [2024-07-15 17:24:39.373902] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:31.819 [2024-07-15 17:24:42.472915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.473002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:31.819 [2024-07-15 17:24:42.473048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3099.043 ms 00:18:31.819 [2024-07-15 17:24:42.473077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.498091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.498163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.819 [2024-07-15 17:24:42.498187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.899 ms 00:18:31.819 [2024-07-15 17:24:42.498209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.498396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.498423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:31.819 [2024-07-15 17:24:42.498439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:31.819 [2024-07-15 17:24:42.498454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.511068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.511137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.819 [2024-07-15 17:24:42.511159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.539 ms 00:18:31.819 [2024-07-15 17:24:42.511179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.511247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.511274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.819 [2024-07-15 17:24:42.511288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:31.819 [2024-07-15 17:24:42.511303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.511960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.512000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.819 [2024-07-15 17:24:42.512019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:18:31.819 [2024-07-15 17:24:42.512037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.512206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.512230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.819 [2024-07-15 17:24:42.512243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:31.819 [2024-07-15 17:24:42.512258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.519934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.520010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.819 [2024-07-15 
17:24:42.520030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.639 ms 00:18:31.819 [2024-07-15 17:24:42.520045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.530409] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:31.819 [2024-07-15 17:24:42.538099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.538160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.819 [2024-07-15 17:24:42.538186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:18:31.819 [2024-07-15 17:24:42.538200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.597701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.597779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:31.819 [2024-07-15 17:24:42.597808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.435 ms 00:18:31.819 [2024-07-15 17:24:42.597822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.598098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.598123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:31.819 [2024-07-15 17:24:42.598152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:18:31.819 [2024-07-15 17:24:42.598165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.602297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.819 [2024-07-15 17:24:42.602341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:31.819 [2024-07-15 17:24:42.602390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.095 ms 00:18:31.819 [2024-07-15 17:24:42.602408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.819 [2024-07-15 17:24:42.605661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.605703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:31.820 [2024-07-15 17:24:42.605725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:18:31.820 [2024-07-15 17:24:42.605738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.606152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.606187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:31.820 [2024-07-15 17:24:42.606211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:31.820 [2024-07-15 17:24:42.606225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.647996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.648063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:31.820 [2024-07-15 17:24:42.648094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.727 ms 00:18:31.820 [2024-07-15 17:24:42.648107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.653440] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.653485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:31.820 [2024-07-15 17:24:42.653507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.265 ms 00:18:31.820 [2024-07-15 17:24:42.653521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.657289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.657355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:31.820 [2024-07-15 17:24:42.657409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:18:31.820 [2024-07-15 17:24:42.657429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.661538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.661582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:31.820 [2024-07-15 17:24:42.661606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.053 ms 00:18:31.820 [2024-07-15 17:24:42.661620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.661676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.661695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:31.820 [2024-07-15 17:24:42.661713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:31.820 [2024-07-15 17:24:42.661725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.661831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.820 [2024-07-15 17:24:42.661860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:31.820 [2024-07-15 17:24:42.661889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:31.820 [2024-07-15 17:24:42.661902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.820 [2024-07-15 17:24:42.663252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3307.895 ms, result 0 00:18:31.820 { 00:18:31.820 "name": "ftl0", 00:18:31.820 "uuid": "b0abc462-3639-4e2b-8bcf-40b334327218" 00:18:31.820 } 00:18:32.077 17:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:32.077 17:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:32.077 17:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:32.335 17:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:32.335 [2024-07-15 17:24:43.113702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:32.335 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:32.335 Zero copy mechanism will not be used. 00:18:32.335 Running I/O for 4 seconds... 
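The benchmark phase is driven entirely over RPC: bdevperf sits idle until each workload is submitted through the bdevperf.py helper. The three invocations traced in this run (the first just above, the other two further below) use the arguments exactly as logged; the 69632-byte I/O size is intentionally above the 65536-byte zero-copy threshold, hence the notice above:

    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$py" perform_tests -q 1   -w randwrite -t 4 -o 69632   # bdevperf.sh@31: QD1,   68 KiB random writes
    "$py" perform_tests -q 128 -w randwrite -t 4 -o 4096    # bdevperf.sh@32: QD128, 4 KiB random writes
    "$py" perform_tests -q 128 -w verify    -t 4 -o 4096    # bdevperf.sh@33: QD128, 4 KiB verify pass
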
00:18:36.545 00:18:36.545 Latency(us) 00:18:36.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.545 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:36.545 ftl0 : 4.00 1510.49 100.31 0.00 0.00 694.39 320.23 6196.13 00:18:36.545 =================================================================================================================== 00:18:36.545 Total : 1510.49 100.31 0.00 0.00 694.39 320.23 6196.13 00:18:36.545 [2024-07-15 17:24:47.121894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:36.545 0 00:18:36.545 17:24:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:36.545 [2024-07-15 17:24:47.265191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:36.545 Running I/O for 4 seconds... 00:18:40.726 00:18:40.726 Latency(us) 00:18:40.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.726 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.727 ftl0 : 4.02 7022.05 27.43 0.00 0.00 18170.51 338.85 67204.19 00:18:40.727 =================================================================================================================== 00:18:40.727 Total : 7022.05 27.43 0.00 0.00 18170.51 0.00 67204.19 00:18:40.727 0 00:18:40.727 [2024-07-15 17:24:51.297163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:40.727 17:24:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:40.727 [2024-07-15 17:24:51.435998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:40.727 Running I/O for 4 seconds... 
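The MiB/s column in these result tables is simply IOPS times I/O size, which the logged figures confirm (the verify run's table follows below). A quick cross-check:

    awk 'BEGIN { MiB = 1024 * 1024
        printf "%.2f\n", 1510.49 * 69632 / MiB   # ~100.31 MiB/s (QD1,   68 KiB randwrite)
        printf "%.2f\n", 7022.05 * 4096  / MiB   # ~27.43  MiB/s (QD128, 4 KiB randwrite)
        printf "%.2f\n", 6012.13 * 4096  / MiB   # ~23.48  MiB/s (QD128, 4 KiB verify)
    }'
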
00:18:44.904 00:18:44.904 Latency(us) 00:18:44.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.904 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:44.904 Verification LBA range: start 0x0 length 0x1400000 00:18:44.904 ftl0 : 4.01 6012.13 23.48 0.00 0.00 21216.65 372.36 25499.46 00:18:44.904 =================================================================================================================== 00:18:44.904 Total : 6012.13 23.48 0.00 0.00 21216.65 0.00 25499.46 00:18:44.904 [2024-07-15 17:24:55.456603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:44.904 0 00:18:44.904 17:24:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:44.904 [2024-07-15 17:24:55.741883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.904 [2024-07-15 17:24:55.742167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.904 [2024-07-15 17:24:55.742383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.904 [2024-07-15 17:24:55.742450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.904 [2024-07-15 17:24:55.742673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.904 [2024-07-15 17:24:55.743648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.904 [2024-07-15 17:24:55.743821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.904 [2024-07-15 17:24:55.743944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:18:44.904 [2024-07-15 17:24:55.744002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.904 [2024-07-15 17:24:55.745781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.904 [2024-07-15 17:24:55.745960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.904 [2024-07-15 17:24:55.746085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:18:44.904 [2024-07-15 17:24:55.746146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.941751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.942092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.163 [2024-07-15 17:24:55.942226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.537 ms 00:18:45.163 [2024-07-15 17:24:55.942286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.948914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.949102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.163 [2024-07-15 17:24:55.949140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.529 ms 00:18:45.163 [2024-07-15 17:24:55.949159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.951312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.951389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.163 [2024-07-15 17:24:55.951410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.036 ms 00:18:45.163 [2024-07-15 17:24:55.951425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.957031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.957101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.163 [2024-07-15 17:24:55.957122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:18:45.163 [2024-07-15 17:24:55.957144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.957287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.957312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.163 [2024-07-15 17:24:55.957327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:45.163 [2024-07-15 17:24:55.957346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.959749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.959796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:45.163 [2024-07-15 17:24:55.959814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:18:45.163 [2024-07-15 17:24:55.959830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.961405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.961457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:45.163 [2024-07-15 17:24:55.961481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:18:45.163 [2024-07-15 17:24:55.961496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.163 [2024-07-15 17:24:55.962781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.163 [2024-07-15 17:24:55.962828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.163 [2024-07-15 17:24:55.962848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:18:45.164 [2024-07-15 17:24:55.962872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.164 [2024-07-15 17:24:55.964208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.164 [2024-07-15 17:24:55.964254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.164 [2024-07-15 17:24:55.964271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:18:45.164 [2024-07-15 17:24:55.964288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.164 [2024-07-15 17:24:55.964328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.164 [2024-07-15 17:24:55.964370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 
[2024-07-15 17:24:55.964436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:18:45.164 [2024-07-15 17:24:55.964821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.964992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.164 [2024-07-15 17:24:55.965453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.165 [2024-07-15 17:24:55.965962] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.165 [2024-07-15 17:24:55.965974] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0abc462-3639-4e2b-8bcf-40b334327218 00:18:45.165 [2024-07-15 17:24:55.965990] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.165 [2024-07-15 17:24:55.966002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:18:45.165 [2024-07-15 17:24:55.966029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.165 [2024-07-15 17:24:55.966042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.165 [2024-07-15 17:24:55.966058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.165 [2024-07-15 17:24:55.966079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.165 [2024-07-15 17:24:55.966094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.165 [2024-07-15 17:24:55.966105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.165 [2024-07-15 17:24:55.966118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.165 [2024-07-15 17:24:55.966138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.165 [2024-07-15 17:24:55.966165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.165 [2024-07-15 17:24:55.966179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:18:45.165 [2024-07-15 17:24:55.966208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.968940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.165 [2024-07-15 17:24:55.969120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.165 [2024-07-15 17:24:55.969268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:18:45.165 [2024-07-15 17:24:55.969324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.969543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.165 [2024-07-15 17:24:55.969618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.165 [2024-07-15 17:24:55.969724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:45.165 [2024-07-15 17:24:55.969792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.977292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:55.977497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.165 [2024-07-15 17:24:55.977677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:55.977739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.977920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:55.977986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.165 [2024-07-15 17:24:55.978051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:55.978100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.978290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:55.978375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.165 [2024-07-15 17:24:55.978434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:55.978565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.978650] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:55.978711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.165 [2024-07-15 17:24:55.978850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:55.978963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:55.992143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:55.992462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.165 [2024-07-15 17:24:55.992599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:55.992773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.002901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.002968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.165 [2024-07-15 17:24:56.002990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.003119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.003155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.165 [2024-07-15 17:24:56.003170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.003250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.003273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.165 [2024-07-15 17:24:56.003291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.003433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.003460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.165 [2024-07-15 17:24:56.003476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.003553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.003577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.165 [2024-07-15 17:24:56.003594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.165 [2024-07-15 17:24:56.003658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.165 [2024-07-15 17:24:56.003679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.165 [2024-07-15 17:24:56.003692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.165 [2024-07-15 17:24:56.003719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:45.165 [2024-07-15 17:24:56.003776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.166 [2024-07-15 17:24:56.003798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.166 [2024-07-15 17:24:56.003815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.166 [2024-07-15 17:24:56.003837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.166 [2024-07-15 17:24:56.004001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.082 ms, result 0 00:18:45.166 true 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 92260 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 92260 ']' 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 92260 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92260 00:18:45.424 killing process with pid 92260 00:18:45.424 Received shutdown signal, test time was about 4.000000 seconds 00:18:45.424 00:18:45.424 Latency(us) 00:18:45.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.424 =================================================================================================================== 00:18:45.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92260' 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 92260 00:18:45.424 17:24:56 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 92260 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 Remove shared memory files 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:48.706 ************************************ 00:18:48.706 END TEST ftl_bdevperf 00:18:48.706 ************************************ 00:18:48.706 00:18:48.706 real 0m24.487s 00:18:48.706 user 0m27.909s 00:18:48.706 sys 0m1.269s 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.706 17:24:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:18:48.706 17:24:59 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:48.706 17:24:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:48.706 17:24:59 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:48.706 17:24:59 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.706 17:24:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 ************************************ 00:18:48.706 START TEST ftl_trim 00:18:48.706 ************************************ 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:48.706 * Looking for test storage... 00:18:48.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:48.706 17:24:59 ftl.ftl_trim -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=92616 00:18:48.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 92616 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 92616 ']' 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.706 17:24:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:48.706 17:24:59 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:48.706 [2024-07-15 17:24:59.349449] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:18:48.706 [2024-07-15 17:24:59.349665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92616 ] 00:18:48.706 [2024-07-15 17:24:59.505814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:48.706 [2024-07-15 17:24:59.528468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.964 [2024-07-15 17:24:59.660986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.964 [2024-07-15 17:24:59.661105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.964 [2024-07-15 17:24:59.661173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.529 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.529 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:49.529 17:25:00 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:49.786 17:25:00 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:49.786 17:25:00 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:49.786 17:25:00 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:49.786 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:49.786 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:49.786 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:49.786 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:49.786 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:50.352 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:50.352 { 00:18:50.352 "name": "nvme0n1", 00:18:50.352 "aliases": [ 00:18:50.352 "b1b48436-e542-4c9b-9cfd-cd4c3c26c954" 00:18:50.352 ], 00:18:50.352 "product_name": "NVMe disk", 00:18:50.352 "block_size": 4096, 00:18:50.352 "num_blocks": 1310720, 00:18:50.352 "uuid": "b1b48436-e542-4c9b-9cfd-cd4c3c26c954", 00:18:50.352 "assigned_rate_limits": { 00:18:50.352 "rw_ios_per_sec": 0, 00:18:50.352 "rw_mbytes_per_sec": 0, 00:18:50.352 "r_mbytes_per_sec": 0, 00:18:50.352 "w_mbytes_per_sec": 0 00:18:50.352 }, 00:18:50.353 "claimed": true, 00:18:50.353 "claim_type": "read_many_write_one", 00:18:50.353 "zoned": false, 00:18:50.353 "supported_io_types": { 00:18:50.353 "read": true, 00:18:50.353 "write": true, 00:18:50.353 "unmap": true, 00:18:50.353 "flush": true, 00:18:50.353 "reset": true, 00:18:50.353 "nvme_admin": true, 00:18:50.353 "nvme_io": true, 00:18:50.353 "nvme_io_md": false, 00:18:50.353 "write_zeroes": true, 00:18:50.353 "zcopy": false, 00:18:50.353 "get_zone_info": false, 00:18:50.353 "zone_management": false, 00:18:50.353 "zone_append": false, 00:18:50.353 "compare": true, 00:18:50.353 "compare_and_write": false, 00:18:50.353 "abort": true, 00:18:50.353 "seek_hole": false, 00:18:50.353 "seek_data": false, 00:18:50.353 "copy": true, 00:18:50.353 "nvme_iov_md": false 00:18:50.353 }, 00:18:50.353 "driver_specific": { 00:18:50.353 "nvme": [ 00:18:50.353 { 00:18:50.353 "pci_address": "0000:00:11.0", 00:18:50.353 "trid": { 00:18:50.353 "trtype": "PCIe", 00:18:50.353 "traddr": "0000:00:11.0" 00:18:50.353 }, 00:18:50.353 
"ctrlr_data": { 00:18:50.353 "cntlid": 0, 00:18:50.353 "vendor_id": "0x1b36", 00:18:50.353 "model_number": "QEMU NVMe Ctrl", 00:18:50.353 "serial_number": "12341", 00:18:50.353 "firmware_revision": "8.0.0", 00:18:50.353 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:50.353 "oacs": { 00:18:50.353 "security": 0, 00:18:50.353 "format": 1, 00:18:50.353 "firmware": 0, 00:18:50.353 "ns_manage": 1 00:18:50.353 }, 00:18:50.353 "multi_ctrlr": false, 00:18:50.353 "ana_reporting": false 00:18:50.353 }, 00:18:50.353 "vs": { 00:18:50.353 "nvme_version": "1.4" 00:18:50.353 }, 00:18:50.353 "ns_data": { 00:18:50.353 "id": 1, 00:18:50.353 "can_share": false 00:18:50.353 } 00:18:50.353 } 00:18:50.353 ], 00:18:50.353 "mp_policy": "active_passive" 00:18:50.353 } 00:18:50.353 } 00:18:50.353 ]' 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:50.353 17:25:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:50.353 17:25:00 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:50.353 17:25:00 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:50.353 17:25:00 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:50.353 17:25:00 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:50.353 17:25:00 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:50.610 17:25:01 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e 00:18:50.610 17:25:01 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:50.610 17:25:01 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d59fff5-347d-42f7-9e6e-4ac7c0db3f8e 00:18:50.867 17:25:01 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:51.125 17:25:01 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=15cf0dda-ed68-4563-84f8-52cd4d47ba11 00:18:51.125 17:25:01 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15cf0dda-ed68-4563-84f8-52cd4d47ba11 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:51.382 17:25:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.382 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.382 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:51.382 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:51.382 17:25:02 
ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:51.382 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:51.639 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:51.639 { 00:18:51.639 "name": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:51.639 "aliases": [ 00:18:51.639 "lvs/nvme0n1p0" 00:18:51.639 ], 00:18:51.639 "product_name": "Logical Volume", 00:18:51.639 "block_size": 4096, 00:18:51.639 "num_blocks": 26476544, 00:18:51.639 "uuid": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:51.639 "assigned_rate_limits": { 00:18:51.639 "rw_ios_per_sec": 0, 00:18:51.639 "rw_mbytes_per_sec": 0, 00:18:51.639 "r_mbytes_per_sec": 0, 00:18:51.639 "w_mbytes_per_sec": 0 00:18:51.639 }, 00:18:51.639 "claimed": false, 00:18:51.639 "zoned": false, 00:18:51.639 "supported_io_types": { 00:18:51.639 "read": true, 00:18:51.639 "write": true, 00:18:51.639 "unmap": true, 00:18:51.639 "flush": false, 00:18:51.639 "reset": true, 00:18:51.639 "nvme_admin": false, 00:18:51.639 "nvme_io": false, 00:18:51.639 "nvme_io_md": false, 00:18:51.639 "write_zeroes": true, 00:18:51.639 "zcopy": false, 00:18:51.639 "get_zone_info": false, 00:18:51.639 "zone_management": false, 00:18:51.639 "zone_append": false, 00:18:51.639 "compare": false, 00:18:51.639 "compare_and_write": false, 00:18:51.639 "abort": false, 00:18:51.639 "seek_hole": true, 00:18:51.639 "seek_data": true, 00:18:51.639 "copy": false, 00:18:51.639 "nvme_iov_md": false 00:18:51.639 }, 00:18:51.639 "driver_specific": { 00:18:51.639 "lvol": { 00:18:51.639 "lvol_store_uuid": "15cf0dda-ed68-4563-84f8-52cd4d47ba11", 00:18:51.639 "base_bdev": "nvme0n1", 00:18:51.639 "thin_provision": true, 00:18:51.639 "num_allocated_clusters": 0, 00:18:51.639 "snapshot": false, 00:18:51.639 "clone": false, 00:18:51.639 "esnap_clone": false 00:18:51.639 } 00:18:51.639 } 00:18:51.639 } 00:18:51.639 ]' 00:18:51.639 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:51.639 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:51.639 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:51.897 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:51.897 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:51.897 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:51.897 17:25:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:51.897 17:25:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:51.897 17:25:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:52.154 17:25:02 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:52.154 17:25:02 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:52.154 17:25:02 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:52.154 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:52.154 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.154 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:52.154 17:25:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:52.154 17:25:02 ftl.ftl_trim -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:52.412 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:52.412 { 00:18:52.412 "name": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:52.412 "aliases": [ 00:18:52.412 "lvs/nvme0n1p0" 00:18:52.412 ], 00:18:52.412 "product_name": "Logical Volume", 00:18:52.412 "block_size": 4096, 00:18:52.412 "num_blocks": 26476544, 00:18:52.412 "uuid": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:52.412 "assigned_rate_limits": { 00:18:52.412 "rw_ios_per_sec": 0, 00:18:52.412 "rw_mbytes_per_sec": 0, 00:18:52.412 "r_mbytes_per_sec": 0, 00:18:52.412 "w_mbytes_per_sec": 0 00:18:52.412 }, 00:18:52.412 "claimed": false, 00:18:52.412 "zoned": false, 00:18:52.412 "supported_io_types": { 00:18:52.412 "read": true, 00:18:52.412 "write": true, 00:18:52.412 "unmap": true, 00:18:52.412 "flush": false, 00:18:52.412 "reset": true, 00:18:52.412 "nvme_admin": false, 00:18:52.412 "nvme_io": false, 00:18:52.412 "nvme_io_md": false, 00:18:52.412 "write_zeroes": true, 00:18:52.412 "zcopy": false, 00:18:52.412 "get_zone_info": false, 00:18:52.412 "zone_management": false, 00:18:52.412 "zone_append": false, 00:18:52.412 "compare": false, 00:18:52.412 "compare_and_write": false, 00:18:52.412 "abort": false, 00:18:52.412 "seek_hole": true, 00:18:52.412 "seek_data": true, 00:18:52.412 "copy": false, 00:18:52.412 "nvme_iov_md": false 00:18:52.412 }, 00:18:52.412 "driver_specific": { 00:18:52.412 "lvol": { 00:18:52.412 "lvol_store_uuid": "15cf0dda-ed68-4563-84f8-52cd4d47ba11", 00:18:52.412 "base_bdev": "nvme0n1", 00:18:52.412 "thin_provision": true, 00:18:52.412 "num_allocated_clusters": 0, 00:18:52.412 "snapshot": false, 00:18:52.412 "clone": false, 00:18:52.412 "esnap_clone": false 00:18:52.412 } 00:18:52.412 } 00:18:52.412 } 00:18:52.412 ]' 00:18:52.412 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:52.412 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:52.412 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:52.670 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:52.670 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:52.670 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:52.670 17:25:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:52.670 17:25:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:52.927 17:25:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:52.927 17:25:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:52.927 17:25:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:52.927 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:52.927 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.927 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:52.927 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:52.927 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87491749-0edd-421d-af6d-8b3951aa8ee3 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 
00:18:53.184 { 00:18:53.184 "name": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:53.184 "aliases": [ 00:18:53.184 "lvs/nvme0n1p0" 00:18:53.184 ], 00:18:53.184 "product_name": "Logical Volume", 00:18:53.184 "block_size": 4096, 00:18:53.184 "num_blocks": 26476544, 00:18:53.184 "uuid": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:53.184 "assigned_rate_limits": { 00:18:53.184 "rw_ios_per_sec": 0, 00:18:53.184 "rw_mbytes_per_sec": 0, 00:18:53.184 "r_mbytes_per_sec": 0, 00:18:53.184 "w_mbytes_per_sec": 0 00:18:53.184 }, 00:18:53.184 "claimed": false, 00:18:53.184 "zoned": false, 00:18:53.184 "supported_io_types": { 00:18:53.184 "read": true, 00:18:53.184 "write": true, 00:18:53.184 "unmap": true, 00:18:53.184 "flush": false, 00:18:53.184 "reset": true, 00:18:53.184 "nvme_admin": false, 00:18:53.184 "nvme_io": false, 00:18:53.184 "nvme_io_md": false, 00:18:53.184 "write_zeroes": true, 00:18:53.184 "zcopy": false, 00:18:53.184 "get_zone_info": false, 00:18:53.184 "zone_management": false, 00:18:53.184 "zone_append": false, 00:18:53.184 "compare": false, 00:18:53.184 "compare_and_write": false, 00:18:53.184 "abort": false, 00:18:53.184 "seek_hole": true, 00:18:53.184 "seek_data": true, 00:18:53.184 "copy": false, 00:18:53.184 "nvme_iov_md": false 00:18:53.184 }, 00:18:53.184 "driver_specific": { 00:18:53.184 "lvol": { 00:18:53.184 "lvol_store_uuid": "15cf0dda-ed68-4563-84f8-52cd4d47ba11", 00:18:53.184 "base_bdev": "nvme0n1", 00:18:53.184 "thin_provision": true, 00:18:53.184 "num_allocated_clusters": 0, 00:18:53.184 "snapshot": false, 00:18:53.184 "clone": false, 00:18:53.184 "esnap_clone": false 00:18:53.184 } 00:18:53.184 } 00:18:53.184 } 00:18:53.184 ]' 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.184 17:25:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.184 17:25:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:53.184 17:25:03 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 87491749-0edd-421d-af6d-8b3951aa8ee3 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:53.443 [2024-07-15 17:25:04.204815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.204930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.443 [2024-07-15 17:25:04.204972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:53.443 [2024-07-15 17:25:04.204997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.209119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.209190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.443 [2024-07-15 17:25:04.209221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.024 ms 00:18:53.443 [2024-07-15 17:25:04.209259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.209523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:53.443 [2024-07-15 17:25:04.209993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.443 [2024-07-15 17:25:04.210053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.210090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.443 [2024-07-15 17:25:04.210114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:18:53.443 [2024-07-15 17:25:04.210141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.210434] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:18:53.443 [2024-07-15 17:25:04.212645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.212702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:53.443 [2024-07-15 17:25:04.212736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:53.443 [2024-07-15 17:25:04.212762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.224142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.224249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.443 [2024-07-15 17:25:04.224283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.223 ms 00:18:53.443 [2024-07-15 17:25:04.224302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.224634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.224668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.443 [2024-07-15 17:25:04.224704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:53.443 [2024-07-15 17:25:04.224724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.224797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.224822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.443 [2024-07-15 17:25:04.224848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:53.443 [2024-07-15 17:25:04.224868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.224939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.443 [2024-07-15 17:25:04.227561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.227629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.443 [2024-07-15 17:25:04.227656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:18:53.443 [2024-07-15 17:25:04.227707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.227786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.227819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.443 [2024-07-15 17:25:04.227842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 
00:18:53.443 [2024-07-15 17:25:04.227893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.227947] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:53.443 [2024-07-15 17:25:04.228194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.443 [2024-07-15 17:25:04.228232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.443 [2024-07-15 17:25:04.228270] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:53.443 [2024-07-15 17:25:04.228296] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.443 [2024-07-15 17:25:04.228324] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.443 [2024-07-15 17:25:04.228344] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.443 [2024-07-15 17:25:04.228393] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.443 [2024-07-15 17:25:04.228416] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.443 [2024-07-15 17:25:04.228444] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.443 [2024-07-15 17:25:04.228465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.228488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.443 [2024-07-15 17:25:04.228516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:53.443 [2024-07-15 17:25:04.228540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.228702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.443 [2024-07-15 17:25:04.228745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.443 [2024-07-15 17:25:04.228772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:53.443 [2024-07-15 17:25:04.228799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.443 [2024-07-15 17:25:04.228977] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.443 [2024-07-15 17:25:04.229033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.443 [2024-07-15 17:25:04.229075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.443 [2024-07-15 17:25:04.229156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.443 [2024-07-15 17:25:04.229227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.443 [2024-07-15 17:25:04.229270] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md_mirror 00:18:53.443 [2024-07-15 17:25:04.229295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.443 [2024-07-15 17:25:04.229316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.443 [2024-07-15 17:25:04.229343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.443 [2024-07-15 17:25:04.229386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.443 [2024-07-15 17:25:04.229414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.443 [2024-07-15 17:25:04.229460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.443 [2024-07-15 17:25:04.229530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.443 [2024-07-15 17:25:04.229603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.443 [2024-07-15 17:25:04.229665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:53.443 [2024-07-15 17:25:04.229741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.443 [2024-07-15 17:25:04.229787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.443 [2024-07-15 17:25:04.229810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.443 [2024-07-15 17:25:04.229836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.443 [2024-07-15 17:25:04.229856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.443 [2024-07-15 17:25:04.229880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.443 [2024-07-15 17:25:04.229900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.443 [2024-07-15 17:25:04.229926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.444 [2024-07-15 17:25:04.229948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.444 [2024-07-15 17:25:04.229974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.444 [2024-07-15 17:25:04.229997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.444 [2024-07-15 17:25:04.230023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.444 [2024-07-15 
17:25:04.230043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.444 [2024-07-15 17:25:04.230065] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.444 [2024-07-15 17:25:04.230111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.444 [2024-07-15 17:25:04.230143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.444 [2024-07-15 17:25:04.230166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.444 [2024-07-15 17:25:04.230191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.444 [2024-07-15 17:25:04.230213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.444 [2024-07-15 17:25:04.230237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.444 [2024-07-15 17:25:04.230257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.444 [2024-07-15 17:25:04.230280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.444 [2024-07-15 17:25:04.230301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.444 [2024-07-15 17:25:04.230335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.444 [2024-07-15 17:25:04.230777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.231026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.444 [2024-07-15 17:25:04.231197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.444 [2024-07-15 17:25:04.231515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:53.444 [2024-07-15 17:25:04.231705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.444 [2024-07-15 17:25:04.231965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.444 [2024-07-15 17:25:04.232183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.444 [2024-07-15 17:25:04.232223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.444 [2024-07-15 17:25:04.232246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.444 [2024-07-15 17:25:04.232270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.444 [2024-07-15 17:25:04.232292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.444 [2024-07-15 17:25:04.232436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.444 [2024-07-15 17:25:04.232464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.444 [2024-07-15 17:25:04.232509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.444 [2024-07-15 17:25:04.232531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.444 [2024-07-15 17:25:04.232552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.444 [2024-07-15 17:25:04.232577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.444 [2024-07-15 17:25:04.232597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.444 [2024-07-15 17:25:04.232629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms 00:18:53.444 [2024-07-15 17:25:04.232649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.444 [2024-07-15 17:25:04.232841] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:53.444 [2024-07-15 17:25:04.232872] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:56.730 [2024-07-15 17:25:07.219164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.730 [2024-07-15 17:25:07.219246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:56.730 [2024-07-15 17:25:07.219284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2986.314 ms 00:18:56.730 [2024-07-15 17:25:07.219298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.730 [2024-07-15 17:25:07.234415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.730 [2024-07-15 17:25:07.234481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.730 [2024-07-15 17:25:07.234506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.978 ms 00:18:56.730 [2024-07-15 17:25:07.234519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.730 [2024-07-15 17:25:07.234759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.730 [2024-07-15 17:25:07.234780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.731 [2024-07-15 17:25:07.234801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:56.731 [2024-07-15 17:25:07.234813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.258854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.258943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.731 [2024-07-15 17:25:07.258980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 00:18:56.731 [2024-07-15 17:25:07.258998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.259175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.259206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.731 [2024-07-15 17:25:07.259229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:56.731 [2024-07-15 17:25:07.259246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.259930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.259974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.731 [2024-07-15 17:25:07.259999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:18:56.731 [2024-07-15 17:25:07.260015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.260274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.260300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.731 [2024-07-15 17:25:07.260327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:18:56.731 [2024-07-15 17:25:07.260345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.271153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.271213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.731 [2024-07-15 
17:25:07.271236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.727 ms 00:18:56.731 [2024-07-15 17:25:07.271271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.281634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:56.731 [2024-07-15 17:25:07.303453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.303533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:56.731 [2024-07-15 17:25:07.303556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.953 ms 00:18:56.731 [2024-07-15 17:25:07.303571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.374822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.374908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:56.731 [2024-07-15 17:25:07.374930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.110 ms 00:18:56.731 [2024-07-15 17:25:07.374953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.375225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.375251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:56.731 [2024-07-15 17:25:07.375266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:56.731 [2024-07-15 17:25:07.375280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.379264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.379314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:56.731 [2024-07-15 17:25:07.379332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:18:56.731 [2024-07-15 17:25:07.379347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.382758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.382805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:56.731 [2024-07-15 17:25:07.382823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:18:56.731 [2024-07-15 17:25:07.382839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.383293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.383323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:56.731 [2024-07-15 17:25:07.383338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:18:56.731 [2024-07-15 17:25:07.383355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.422833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.422922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:56.731 [2024-07-15 17:25:07.422944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.421 ms 00:18:56.731 [2024-07-15 17:25:07.422960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 
17:25:07.428230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.428280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:56.731 [2024-07-15 17:25:07.428299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:18:56.731 [2024-07-15 17:25:07.428315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.432011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.432058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:56.731 [2024-07-15 17:25:07.432075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:18:56.731 [2024-07-15 17:25:07.432090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.436454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.436502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:56.731 [2024-07-15 17:25:07.436520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:18:56.731 [2024-07-15 17:25:07.436539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.731 [2024-07-15 17:25:07.436626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.731 [2024-07-15 17:25:07.436651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:56.731 [2024-07-15 17:25:07.436666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:56.732 [2024-07-15 17:25:07.436680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.732 [2024-07-15 17:25:07.436782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.732 [2024-07-15 17:25:07.436802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:56.732 [2024-07-15 17:25:07.436816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:56.732 [2024-07-15 17:25:07.436851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.732 [2024-07-15 17:25:07.438117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:56.732 [2024-07-15 17:25:07.439422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3232.986 ms, result 0 00:18:56.732 [2024-07-15 17:25:07.440199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:56.732 { 00:18:56.732 "name": "ftl0", 00:18:56.732 "uuid": "2767dea2-a9dd-4cc3-b756-0d973e3d9830" 00:18:56.732 } 00:18:56.732 17:25:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:56.732 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:56.990 17:25:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:57.248 [ 00:18:57.248 { 00:18:57.248 "name": "ftl0", 00:18:57.248 "aliases": [ 00:18:57.248 "2767dea2-a9dd-4cc3-b756-0d973e3d9830" 00:18:57.248 ], 00:18:57.248 "product_name": "FTL disk", 00:18:57.248 "block_size": 4096, 00:18:57.248 "num_blocks": 23592960, 00:18:57.248 "uuid": "2767dea2-a9dd-4cc3-b756-0d973e3d9830", 00:18:57.248 "assigned_rate_limits": { 00:18:57.248 "rw_ios_per_sec": 0, 00:18:57.248 "rw_mbytes_per_sec": 0, 00:18:57.248 "r_mbytes_per_sec": 0, 00:18:57.248 "w_mbytes_per_sec": 0 00:18:57.248 }, 00:18:57.248 "claimed": false, 00:18:57.248 "zoned": false, 00:18:57.248 "supported_io_types": { 00:18:57.248 "read": true, 00:18:57.248 "write": true, 00:18:57.248 "unmap": true, 00:18:57.248 "flush": true, 00:18:57.248 "reset": false, 00:18:57.248 "nvme_admin": false, 00:18:57.248 "nvme_io": false, 00:18:57.248 "nvme_io_md": false, 00:18:57.248 "write_zeroes": true, 00:18:57.248 "zcopy": false, 00:18:57.248 "get_zone_info": false, 00:18:57.248 "zone_management": false, 00:18:57.248 "zone_append": false, 00:18:57.248 "compare": false, 00:18:57.248 "compare_and_write": false, 00:18:57.248 "abort": false, 00:18:57.248 "seek_hole": false, 00:18:57.248 "seek_data": false, 00:18:57.248 "copy": false, 00:18:57.248 "nvme_iov_md": false 00:18:57.248 }, 00:18:57.248 "driver_specific": { 00:18:57.248 "ftl": { 00:18:57.248 "base_bdev": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:57.248 "cache": "nvc0n1p0" 00:18:57.248 } 00:18:57.248 } 00:18:57.248 } 00:18:57.248 ] 00:18:57.248 17:25:08 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:18:57.248 17:25:08 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:57.248 17:25:08 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:57.507 17:25:08 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:57.507 17:25:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:57.765 17:25:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:57.765 { 00:18:57.765 "name": "ftl0", 00:18:57.765 "aliases": [ 00:18:57.765 "2767dea2-a9dd-4cc3-b756-0d973e3d9830" 00:18:57.765 ], 00:18:57.765 "product_name": "FTL disk", 00:18:57.765 "block_size": 4096, 00:18:57.765 "num_blocks": 23592960, 00:18:57.765 "uuid": "2767dea2-a9dd-4cc3-b756-0d973e3d9830", 00:18:57.765 "assigned_rate_limits": { 00:18:57.765 "rw_ios_per_sec": 0, 00:18:57.765 "rw_mbytes_per_sec": 0, 00:18:57.765 "r_mbytes_per_sec": 0, 00:18:57.765 "w_mbytes_per_sec": 0 00:18:57.765 }, 00:18:57.765 "claimed": false, 00:18:57.765 "zoned": false, 00:18:57.765 "supported_io_types": { 00:18:57.765 "read": true, 00:18:57.765 "write": true, 00:18:57.765 "unmap": true, 00:18:57.765 "flush": true, 00:18:57.765 "reset": false, 00:18:57.765 "nvme_admin": false, 00:18:57.765 "nvme_io": false, 00:18:57.765 "nvme_io_md": false, 00:18:57.765 "write_zeroes": true, 00:18:57.765 "zcopy": false, 00:18:57.765 "get_zone_info": false, 00:18:57.765 "zone_management": false, 00:18:57.765 "zone_append": false, 00:18:57.765 "compare": false, 00:18:57.765 "compare_and_write": false, 00:18:57.765 "abort": false, 00:18:57.765 "seek_hole": false, 00:18:57.765 "seek_data": false, 00:18:57.765 "copy": false, 00:18:57.765 "nvme_iov_md": false 00:18:57.765 }, 00:18:57.765 "driver_specific": { 00:18:57.765 "ftl": { 00:18:57.765 "base_bdev": "87491749-0edd-421d-af6d-8b3951aa8ee3", 00:18:57.765 "cache": "nvc0n1p0" 
00:18:57.765 } 00:18:57.765 } 00:18:57.765 } 00:18:57.765 ]' 00:18:57.765 17:25:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:58.023 17:25:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:58.023 17:25:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:58.283 [2024-07-15 17:25:08.944351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.944514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.283 [2024-07-15 17:25:08.944584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:58.283 [2024-07-15 17:25:08.944630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.944743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:58.283 [2024-07-15 17:25:08.945813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.945994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.283 [2024-07-15 17:25:08.946123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:18:58.283 [2024-07-15 17:25:08.946189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.946830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.946889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:58.283 [2024-07-15 17:25:08.946906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:18:58.283 [2024-07-15 17:25:08.946920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.950515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.950554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.283 [2024-07-15 17:25:08.950570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.561 ms 00:18:58.283 [2024-07-15 17:25:08.950584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.957959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.958004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.283 [2024-07-15 17:25:08.958020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.294 ms 00:18:58.283 [2024-07-15 17:25:08.958049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.959627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.959675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.283 [2024-07-15 17:25:08.959691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:18:58.283 [2024-07-15 17:25:08.959705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.964670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.964721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.283 [2024-07-15 17:25:08.964755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:18:58.283 
[2024-07-15 17:25:08.964777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.964977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.965001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.283 [2024-07-15 17:25:08.965014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:58.283 [2024-07-15 17:25:08.965029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.967444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.967489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:58.283 [2024-07-15 17:25:08.967505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:18:58.283 [2024-07-15 17:25:08.967522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.969124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.969169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:58.283 [2024-07-15 17:25:08.969184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:18:58.283 [2024-07-15 17:25:08.969198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.283 [2024-07-15 17:25:08.970501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.283 [2024-07-15 17:25:08.970545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.283 [2024-07-15 17:25:08.970561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:18:58.284 [2024-07-15 17:25:08.970574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.284 [2024-07-15 17:25:08.971767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.284 [2024-07-15 17:25:08.971812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.284 [2024-07-15 17:25:08.971828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:18:58.284 [2024-07-15 17:25:08.971841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.284 [2024-07-15 17:25:08.971890] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.284 [2024-07-15 17:25:08.971918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.971954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.971973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.971986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972399] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 
17:25:08.972770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.972995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:58.284 [2024-07-15 17:25:08.973128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:58.285 [2024-07-15 17:25:08.973156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:58.285 [2024-07-15 17:25:08.973434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.285 [2024-07-15 17:25:08.973446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:18:58.285 [2024-07-15 17:25:08.973467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.285 [2024-07-15 17:25:08.973483] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.285 [2024-07-15 17:25:08.973497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.285 [2024-07-15 17:25:08.973525] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.285 [2024-07-15 17:25:08.973540] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.285 [2024-07-15 17:25:08.973551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.285 [2024-07-15 17:25:08.973564] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.285 [2024-07-15 17:25:08.973575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.285 [2024-07-15 17:25:08.973587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:58.285 [2024-07-15 17:25:08.973599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.285 [2024-07-15 17:25:08.973612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.285 [2024-07-15 17:25:08.973624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:18:58.285 [2024-07-15 17:25:08.973641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.975926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.285 [2024-07-15 17:25:08.975961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.285 [2024-07-15 17:25:08.975976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:18:58.285 [2024-07-15 17:25:08.975990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.976150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.285 [2024-07-15 17:25:08.976172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.285 [2024-07-15 17:25:08.976192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:58.285 [2024-07-15 17:25:08.976206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.984727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:08.984783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.285 [2024-07-15 17:25:08.984801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:08.984815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.984942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:08.984966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.285 [2024-07-15 17:25:08.984980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:08.984997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.985095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:08.985134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.285 [2024-07-15 17:25:08.985149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:08.985162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.985203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:08.985220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.285 [2024-07-15 17:25:08.985232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:08.985245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:08.999342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:58.285 [2024-07-15 17:25:08.999434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.285 [2024-07-15 17:25:08.999454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:08.999488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.009997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.285 [2024-07-15 17:25:09.010081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.010213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.285 [2024-07-15 17:25:09.010253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.010336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.285 [2024-07-15 17:25:09.010397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.010601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.285 [2024-07-15 17:25:09.010643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.010748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.285 [2024-07-15 17:25:09.010786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.010883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.010913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.285 [2024-07-15 17:25:09.010947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.010962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 17:25:09.011058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.285 [2024-07-15 17:25:09.011082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.285 [2024-07-15 17:25:09.011095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.285 [2024-07-15 17:25:09.011109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.285 [2024-07-15 
17:25:09.011340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.974 ms, result 0 00:18:58.285 true 00:18:58.285 17:25:09 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 92616 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92616 ']' 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92616 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92616 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:58.285 killing process with pid 92616 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92616' 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 92616 00:18:58.285 17:25:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 92616 00:19:00.835 17:25:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:02.207 65536+0 records in 00:19:02.207 65536+0 records out 00:19:02.207 268435456 bytes (268 MB, 256 MiB) copied, 1.17741 s, 228 MB/s 00:19:02.207 17:25:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.207 [2024-07-15 17:25:12.750130] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:02.207 [2024-07-15 17:25:12.750303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92804 ] 00:19:02.207 [2024-07-15 17:25:12.892819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:02.207 [2024-07-15 17:25:12.909696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.207 [2024-07-15 17:25:13.003281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.467 [2024-07-15 17:25:13.129079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.467 [2024-07-15 17:25:13.129190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.467 [2024-07-15 17:25:13.290083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.290156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.467 [2024-07-15 17:25:13.290180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:02.467 [2024-07-15 17:25:13.290193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.293015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.293058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.467 [2024-07-15 17:25:13.293087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:19:02.467 [2024-07-15 17:25:13.293100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.293219] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.467 [2024-07-15 17:25:13.293554] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.467 [2024-07-15 17:25:13.293592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.293606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.467 [2024-07-15 17:25:13.293618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:19:02.467 [2024-07-15 17:25:13.293630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.295774] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.467 [2024-07-15 17:25:13.298790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.298832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.467 [2024-07-15 17:25:13.298861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:19:02.467 [2024-07-15 17:25:13.298880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.298993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.299014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.467 [2024-07-15 17:25:13.299032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:02.467 [2024-07-15 17:25:13.299047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.308075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.308122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.467 [2024-07-15 17:25:13.308138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.971 ms 00:19:02.467 [2024-07-15 17:25:13.308149] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.308311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.308338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.467 [2024-07-15 17:25:13.308352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:02.467 [2024-07-15 17:25:13.308402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.308454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.308471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.467 [2024-07-15 17:25:13.308500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:02.467 [2024-07-15 17:25:13.308515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.308565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:02.467 [2024-07-15 17:25:13.310718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.310754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.467 [2024-07-15 17:25:13.310768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:19:02.467 [2024-07-15 17:25:13.310780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.310834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.310851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.467 [2024-07-15 17:25:13.310868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:02.467 [2024-07-15 17:25:13.310880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.310905] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.467 [2024-07-15 17:25:13.310940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.467 [2024-07-15 17:25:13.310989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.467 [2024-07-15 17:25:13.311011] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:02.467 [2024-07-15 17:25:13.311115] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.467 [2024-07-15 17:25:13.311130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.467 [2024-07-15 17:25:13.311146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:02.467 [2024-07-15 17:25:13.311173] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311191] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:02.467 [2024-07-15 17:25:13.311228] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:02.467 [2024-07-15 17:25:13.311238] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.467 [2024-07-15 17:25:13.311258] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.467 [2024-07-15 17:25:13.311270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.311285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.467 [2024-07-15 17:25:13.311297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:02.467 [2024-07-15 17:25:13.311308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.311419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.467 [2024-07-15 17:25:13.311437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.467 [2024-07-15 17:25:13.311454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:02.467 [2024-07-15 17:25:13.311486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.467 [2024-07-15 17:25:13.311603] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.467 [2024-07-15 17:25:13.311623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.467 [2024-07-15 17:25:13.311636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.467 [2024-07-15 17:25:13.311670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.467 [2024-07-15 17:25:13.311701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.467 [2024-07-15 17:25:13.311727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.467 [2024-07-15 17:25:13.311737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:02.467 [2024-07-15 17:25:13.311747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.467 [2024-07-15 17:25:13.311771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.467 [2024-07-15 17:25:13.311782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:02.467 [2024-07-15 17:25:13.311795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.467 [2024-07-15 17:25:13.311816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.467 [2024-07-15 17:25:13.311848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311858] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.467 [2024-07-15 17:25:13.311868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.467 [2024-07-15 17:25:13.311879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:02.467 [2024-07-15 17:25:13.311889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.468 [2024-07-15 17:25:13.311906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.468 [2024-07-15 17:25:13.311917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:02.468 [2024-07-15 17:25:13.311927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.468 [2024-07-15 17:25:13.311938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.468 [2024-07-15 17:25:13.311949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:02.468 [2024-07-15 17:25:13.311959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.468 [2024-07-15 17:25:13.311970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.468 [2024-07-15 17:25:13.311980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:02.468 [2024-07-15 17:25:13.311990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.468 [2024-07-15 17:25:13.312000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.468 [2024-07-15 17:25:13.312010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:02.468 [2024-07-15 17:25:13.312021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.468 [2024-07-15 17:25:13.312031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.468 [2024-07-15 17:25:13.312041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:02.468 [2024-07-15 17:25:13.312051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.468 [2024-07-15 17:25:13.312061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.468 [2024-07-15 17:25:13.312074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:02.468 [2024-07-15 17:25:13.312085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.468 [2024-07-15 17:25:13.312095] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.468 [2024-07-15 17:25:13.312111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.468 [2024-07-15 17:25:13.312122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.468 [2024-07-15 17:25:13.312133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.468 [2024-07-15 17:25:13.312149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.468 [2024-07-15 17:25:13.312160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.468 [2024-07-15 17:25:13.312170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.468 [2024-07-15 17:25:13.312181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.468 [2024-07-15 17:25:13.312197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.468 [2024-07-15 17:25:13.312208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:02.468 [2024-07-15 17:25:13.312220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.468 [2024-07-15 17:25:13.312234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:02.468 [2024-07-15 17:25:13.312258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:02.468 [2024-07-15 17:25:13.312276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:02.468 [2024-07-15 17:25:13.312288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:02.468 [2024-07-15 17:25:13.312299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:02.468 [2024-07-15 17:25:13.312311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:02.468 [2024-07-15 17:25:13.312322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:02.468 [2024-07-15 17:25:13.312334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:02.468 [2024-07-15 17:25:13.312345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:02.468 [2024-07-15 17:25:13.312370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:02.468 [2024-07-15 17:25:13.312431] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.468 [2024-07-15 17:25:13.312443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.468 [2024-07-15 17:25:13.312467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.468 [2024-07-15 17:25:13.312482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.468 [2024-07-15 17:25:13.312495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.468 [2024-07-15 17:25:13.312507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.468 [2024-07-15 17:25:13.312519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.468 [2024-07-15 17:25:13.312531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:19:02.468 [2024-07-15 17:25:13.312542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.336722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.336796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.727 [2024-07-15 17:25:13.336818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.070 ms 00:19:02.727 [2024-07-15 17:25:13.336831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.337051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.337083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:02.727 [2024-07-15 17:25:13.337106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:02.727 [2024-07-15 17:25:13.337117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.349851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.349918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.727 [2024-07-15 17:25:13.349938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:19:02.727 [2024-07-15 17:25:13.349951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.350090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.350112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.727 [2024-07-15 17:25:13.350131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:02.727 [2024-07-15 17:25:13.350151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.350739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.350768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.727 [2024-07-15 17:25:13.350789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:02.727 [2024-07-15 17:25:13.350801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.350975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.350995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.727 [2024-07-15 17:25:13.351007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:02.727 [2024-07-15 17:25:13.351019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.359270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 
17:25:13.359333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.727 [2024-07-15 17:25:13.359388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.211 ms 00:19:02.727 [2024-07-15 17:25:13.359409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.362603] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:02.727 [2024-07-15 17:25:13.362651] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.727 [2024-07-15 17:25:13.362671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.362684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.727 [2024-07-15 17:25:13.362698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:19:02.727 [2024-07-15 17:25:13.362710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.378795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.378840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.727 [2024-07-15 17:25:13.378859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.012 ms 00:19:02.727 [2024-07-15 17:25:13.378879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.381084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.381124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.727 [2024-07-15 17:25:13.381140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms 00:19:02.727 [2024-07-15 17:25:13.381152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.382791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.382830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.727 [2024-07-15 17:25:13.382846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:19:02.727 [2024-07-15 17:25:13.382875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.383392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.383422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.727 [2024-07-15 17:25:13.383437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:02.727 [2024-07-15 17:25:13.383449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.406808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.406887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.727 [2024-07-15 17:25:13.406908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.321 ms 00:19:02.727 [2024-07-15 17:25:13.406932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.415292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:02.727 [2024-07-15 17:25:13.436218] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.436320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.727 [2024-07-15 17:25:13.436341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.134 ms 00:19:02.727 [2024-07-15 17:25:13.436353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.436525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.436551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.727 [2024-07-15 17:25:13.436566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:02.727 [2024-07-15 17:25:13.436577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.436658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.436677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.727 [2024-07-15 17:25:13.436691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:02.727 [2024-07-15 17:25:13.436702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.436739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.436763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.727 [2024-07-15 17:25:13.436782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:02.727 [2024-07-15 17:25:13.436794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.436849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.727 [2024-07-15 17:25:13.436883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.436895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.727 [2024-07-15 17:25:13.436908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:02.727 [2024-07-15 17:25:13.436929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.441334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.441394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.727 [2024-07-15 17:25:13.441420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:19:02.727 [2024-07-15 17:25:13.441433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.441532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.727 [2024-07-15 17:25:13.441552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.727 [2024-07-15 17:25:13.441566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:02.727 [2024-07-15 17:25:13.441592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.727 [2024-07-15 17:25:13.442775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.727 [2024-07-15 17:25:13.444004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.322 
ms, result 0 00:19:02.727 [2024-07-15 17:25:13.444878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.727 [2024-07-15 17:25:13.453065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.532  Copying: 22/256 [MB] (22 MBps) Copying: 46/256 [MB] (24 MBps) Copying: 71/256 [MB] (24 MBps) Copying: 96/256 [MB] (24 MBps) Copying: 121/256 [MB] (24 MBps) Copying: 144/256 [MB] (23 MBps) Copying: 168/256 [MB] (23 MBps) Copying: 191/256 [MB] (23 MBps) Copying: 214/256 [MB] (22 MBps) Copying: 237/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-15 17:25:24.254553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.532 [2024-07-15 17:25:24.256273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.532 [2024-07-15 17:25:24.256318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:13.532 [2024-07-15 17:25:24.256339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.532 [2024-07-15 17:25:24.256351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.532 [2024-07-15 17:25:24.256420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:13.532 [2024-07-15 17:25:24.257298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.532 [2024-07-15 17:25:24.257343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:13.532 [2024-07-15 17:25:24.257384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:19:13.532 [2024-07-15 17:25:24.257404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.532 [2024-07-15 17:25:24.259281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.532 [2024-07-15 17:25:24.259324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:13.532 [2024-07-15 17:25:24.259354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:19:13.532 [2024-07-15 17:25:24.259383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.532 [2024-07-15 17:25:24.266692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.532 [2024-07-15 17:25:24.266738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:13.532 [2024-07-15 17:25:24.266778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.284 ms 00:19:13.532 [2024-07-15 17:25:24.266795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.532 [2024-07-15 17:25:24.274158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.532 [2024-07-15 17:25:24.274205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:13.532 [2024-07-15 17:25:24.274221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.284 ms 00:19:13.532 [2024-07-15 17:25:24.274233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.532 [2024-07-15 17:25:24.275956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.276002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:13.533 [2024-07-15 17:25:24.276017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.666 ms 00:19:13.533 [2024-07-15 17:25:24.276029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.280224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.280268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:13.533 [2024-07-15 17:25:24.280285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:19:13.533 [2024-07-15 17:25:24.280298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.280463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.280488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:13.533 [2024-07-15 17:25:24.280502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:13.533 [2024-07-15 17:25:24.280519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.282799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.282841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:13.533 [2024-07-15 17:25:24.282872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:19:13.533 [2024-07-15 17:25:24.282884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.284610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.284672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:13.533 [2024-07-15 17:25:24.284688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:19:13.533 [2024-07-15 17:25:24.284699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.285997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.286039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.533 [2024-07-15 17:25:24.286054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:19:13.533 [2024-07-15 17:25:24.286066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.287318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.533 [2024-07-15 17:25:24.287373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.533 [2024-07-15 17:25:24.287391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:19:13.533 [2024-07-15 17:25:24.287402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.533 [2024-07-15 17:25:24.287444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.533 [2024-07-15 17:25:24.287467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 
17:25:24.287533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:13.533 [2024-07-15 17:25:24.287825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.287990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.533 [2024-07-15 17:25:24.288329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.534 [2024-07-15 17:25:24.288699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.534 [2024-07-15 17:25:24.288723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:13.534 [2024-07-15 17:25:24.288735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.534 [2024-07-15 17:25:24.288747] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.534 
[2024-07-15 17:25:24.288757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.534 [2024-07-15 17:25:24.288774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.534 [2024-07-15 17:25:24.288785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.534 [2024-07-15 17:25:24.288804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.534 [2024-07-15 17:25:24.288815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.534 [2024-07-15 17:25:24.288825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.534 [2024-07-15 17:25:24.288835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:13.534 [2024-07-15 17:25:24.288846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.534 [2024-07-15 17:25:24.288858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.534 [2024-07-15 17:25:24.288881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:19:13.534 [2024-07-15 17:25:24.288893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.291095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.534 [2024-07-15 17:25:24.291129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.534 [2024-07-15 17:25:24.291152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:19:13.534 [2024-07-15 17:25:24.291163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.291294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.534 [2024-07-15 17:25:24.291310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.534 [2024-07-15 17:25:24.291322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:13.534 [2024-07-15 17:25:24.291334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.298917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.298969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.534 [2024-07-15 17:25:24.298986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.298998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.299095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.299113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.534 [2024-07-15 17:25:24.299125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.299137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.299205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.299223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.534 [2024-07-15 17:25:24.299251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.299262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.299288] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.299302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.534 [2024-07-15 17:25:24.299313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.299336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.313445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.313530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.534 [2024-07-15 17:25:24.313549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.313562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.323947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.534 [2024-07-15 17:25:24.324044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.534 [2024-07-15 17:25:24.324195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.534 [2024-07-15 17:25:24.324282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.534 [2024-07-15 17:25:24.324441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:13.534 [2024-07-15 17:25:24.324557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.534 [2024-07-15 17:25:24.324660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:13.534 [2024-07-15 17:25:24.324747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.534 [2024-07-15 17:25:24.324764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.534 [2024-07-15 17:25:24.324777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.534 [2024-07-15 17:25:24.324788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.534 [2024-07-15 17:25:24.324959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.651 ms, result 0 00:19:14.099 00:19:14.099 00:19:14.099 17:25:24 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=92929 00:19:14.099 17:25:24 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:14.099 17:25:24 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 92929 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 92929 ']' 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.099 17:25:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:14.099 [2024-07-15 17:25:24.866287] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:14.099 [2024-07-15 17:25:24.866509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92929 ] 00:19:14.356 [2024-07-15 17:25:25.018303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:14.356 [2024-07-15 17:25:25.039181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.356 [2024-07-15 17:25:25.130729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.921 17:25:25 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.921 17:25:25 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:14.921 17:25:25 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:15.487 [2024-07-15 17:25:26.050019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.487 [2024-07-15 17:25:26.050136] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.487 [2024-07-15 17:25:26.227457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.227535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:15.487 [2024-07-15 17:25:26.227557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:15.487 [2024-07-15 17:25:26.227574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.230410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.230501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.487 [2024-07-15 17:25:26.230527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:19:15.487 [2024-07-15 17:25:26.230542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.230642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:15.487 [2024-07-15 17:25:26.230945] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:15.487 [2024-07-15 17:25:26.230981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.230998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.487 [2024-07-15 17:25:26.231011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:15.487 [2024-07-15 17:25:26.231044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.233182] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:15.487 [2024-07-15 17:25:26.236191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.236241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:15.487 [2024-07-15 17:25:26.236262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:19:15.487 [2024-07-15 17:25:26.236275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.236403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.236424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:15.487 [2024-07-15 17:25:26.236444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:15.487 [2024-07-15 17:25:26.236458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.245288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 
17:25:26.245341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.487 [2024-07-15 17:25:26.245371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.763 ms 00:19:15.487 [2024-07-15 17:25:26.245386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.245571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.245593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.487 [2024-07-15 17:25:26.245614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:15.487 [2024-07-15 17:25:26.245625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.245670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.245685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:15.487 [2024-07-15 17:25:26.245700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:15.487 [2024-07-15 17:25:26.245711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.245749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:15.487 [2024-07-15 17:25:26.247802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.247848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.487 [2024-07-15 17:25:26.247864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:19:15.487 [2024-07-15 17:25:26.247878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.247927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.487 [2024-07-15 17:25:26.247946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:15.487 [2024-07-15 17:25:26.247960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:15.487 [2024-07-15 17:25:26.247973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.487 [2024-07-15 17:25:26.248013] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:15.487 [2024-07-15 17:25:26.248054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:15.487 [2024-07-15 17:25:26.248107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:15.487 [2024-07-15 17:25:26.248139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:15.487 [2024-07-15 17:25:26.248244] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:15.487 [2024-07-15 17:25:26.248263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:15.487 [2024-07-15 17:25:26.248278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:15.487 [2024-07-15 17:25:26.248295] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:15.487 [2024-07-15 17:25:26.248310] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:15.487 [2024-07-15 17:25:26.248327] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:15.488 [2024-07-15 17:25:26.248339] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:15.488 [2024-07-15 17:25:26.248370] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:15.488 [2024-07-15 17:25:26.248394] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:15.488 [2024-07-15 17:25:26.248411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.248423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:15.488 [2024-07-15 17:25:26.248437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:19:15.488 [2024-07-15 17:25:26.248456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.248556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.248570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:15.488 [2024-07-15 17:25:26.248584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:15.488 [2024-07-15 17:25:26.248595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.248723] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:15.488 [2024-07-15 17:25:26.248753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:15.488 [2024-07-15 17:25:26.248770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.488 [2024-07-15 17:25:26.248782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.248799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:15.488 [2024-07-15 17:25:26.248810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.248823] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:15.488 [2024-07-15 17:25:26.248833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:15.488 [2024-07-15 17:25:26.248847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:15.488 [2024-07-15 17:25:26.248857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.488 [2024-07-15 17:25:26.248870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:15.488 [2024-07-15 17:25:26.248881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:15.488 [2024-07-15 17:25:26.248894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.488 [2024-07-15 17:25:26.248904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:15.488 [2024-07-15 17:25:26.248917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:15.488 [2024-07-15 17:25:26.248927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.248941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:15.488 [2024-07-15 17:25:26.248951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:15.488 [2024-07-15 17:25:26.248965] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.248976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:15.488 [2024-07-15 17:25:26.248991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:15.488 [2024-07-15 17:25:26.249041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:15.488 [2024-07-15 17:25:26.249078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:15.488 [2024-07-15 17:25:26.249130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:15.488 [2024-07-15 17:25:26.249166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.488 [2024-07-15 17:25:26.249190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:15.488 [2024-07-15 17:25:26.249201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:15.488 [2024-07-15 17:25:26.249216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.488 [2024-07-15 17:25:26.249227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:15.488 [2024-07-15 17:25:26.249240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:15.488 [2024-07-15 17:25:26.249251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:15.488 [2024-07-15 17:25:26.249275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:15.488 [2024-07-15 17:25:26.249287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249298] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:15.488 [2024-07-15 17:25:26.249312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:15.488 [2024-07-15 17:25:26.249324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.488 [2024-07-15 17:25:26.249349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:15.488 [2024-07-15 17:25:26.249385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:15.488 [2024-07-15 17:25:26.249400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:15.488 [2024-07-15 17:25:26.249416] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:15.488 [2024-07-15 17:25:26.249427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:15.488 [2024-07-15 17:25:26.249443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:15.488 [2024-07-15 17:25:26.249455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:15.488 [2024-07-15 17:25:26.249473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:15.488 [2024-07-15 17:25:26.249503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:15.488 [2024-07-15 17:25:26.249515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:15.488 [2024-07-15 17:25:26.249529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:15.488 [2024-07-15 17:25:26.249541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:15.488 [2024-07-15 17:25:26.249555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:15.488 [2024-07-15 17:25:26.249567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:15.488 [2024-07-15 17:25:26.249581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:15.488 [2024-07-15 17:25:26.249593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:15.488 [2024-07-15 17:25:26.249606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:15.488 [2024-07-15 17:25:26.249671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:15.488 [2024-07-15 17:25:26.249687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:15.488 [2024-07-15 17:25:26.249715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:15.488 [2024-07-15 17:25:26.249727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:15.488 [2024-07-15 17:25:26.249742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:15.488 [2024-07-15 17:25:26.249755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.249769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:15.488 [2024-07-15 17:25:26.249781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:15.488 [2024-07-15 17:25:26.249794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.265279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.265349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.488 [2024-07-15 17:25:26.265384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.370 ms 00:19:15.488 [2024-07-15 17:25:26.265417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.265607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.265646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:15.488 [2024-07-15 17:25:26.265660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:15.488 [2024-07-15 17:25:26.265674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.279652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.279715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.488 [2024-07-15 17:25:26.279736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.945 ms 00:19:15.488 [2024-07-15 17:25:26.279755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.279910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.488 [2024-07-15 17:25:26.279933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.488 [2024-07-15 17:25:26.279947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:15.488 [2024-07-15 17:25:26.279976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.488 [2024-07-15 17:25:26.280551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.280598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.489 [2024-07-15 17:25:26.280616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:15.489 [2024-07-15 17:25:26.280630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.280821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.280851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.489 [2024-07-15 17:25:26.280864] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:15.489 [2024-07-15 17:25:26.280878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.290779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.290864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.489 [2024-07-15 17:25:26.290892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.869 ms 00:19:15.489 [2024-07-15 17:25:26.290908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.294070] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:15.489 [2024-07-15 17:25:26.294130] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:15.489 [2024-07-15 17:25:26.294176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.294192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:15.489 [2024-07-15 17:25:26.294206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:19:15.489 [2024-07-15 17:25:26.294220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.310577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.310628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:15.489 [2024-07-15 17:25:26.310646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.301 ms 00:19:15.489 [2024-07-15 17:25:26.310664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.312818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.312880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:15.489 [2024-07-15 17:25:26.312896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:19:15.489 [2024-07-15 17:25:26.312909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.314573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.314616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:15.489 [2024-07-15 17:25:26.314632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:19:15.489 [2024-07-15 17:25:26.314648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.489 [2024-07-15 17:25:26.315061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.489 [2024-07-15 17:25:26.315094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:15.489 [2024-07-15 17:25:26.315110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:19:15.489 [2024-07-15 17:25:26.315124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.348428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.348517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:15.747 [2024-07-15 17:25:26.348541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.261 ms 
00:19:15.747 [2024-07-15 17:25:26.348560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.357049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:15.747 [2024-07-15 17:25:26.377996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.378086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:15.747 [2024-07-15 17:25:26.378127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.290 ms 00:19:15.747 [2024-07-15 17:25:26.378155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.378303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.378329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:15.747 [2024-07-15 17:25:26.378345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:15.747 [2024-07-15 17:25:26.378357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.378475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.378497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:15.747 [2024-07-15 17:25:26.378514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:15.747 [2024-07-15 17:25:26.378526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.378577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.378592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:15.747 [2024-07-15 17:25:26.378617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:15.747 [2024-07-15 17:25:26.378629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.378674] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:15.747 [2024-07-15 17:25:26.378690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.378704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:15.747 [2024-07-15 17:25:26.378726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:15.747 [2024-07-15 17:25:26.378740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.383147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.383212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:15.747 [2024-07-15 17:25:26.383231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.374 ms 00:19:15.747 [2024-07-15 17:25:26.383250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 17:25:26.383341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.747 [2024-07-15 17:25:26.383381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:15.747 [2024-07-15 17:25:26.383401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:15.747 [2024-07-15 17:25:26.383416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.747 [2024-07-15 
17:25:26.384715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:15.747 [2024-07-15 17:25:26.385984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.891 ms, result 0 00:19:15.747 [2024-07-15 17:25:26.387101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:15.747 Some configs were skipped because the RPC state that can call them passed over. 00:19:15.747 17:25:26 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:16.006 [2024-07-15 17:25:26.672719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.006 [2024-07-15 17:25:26.672793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:16.006 [2024-07-15 17:25:26.672818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:19:16.006 [2024-07-15 17:25:26.672831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.006 [2024-07-15 17:25:26.672879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.946 ms, result 0 00:19:16.006 true 00:19:16.006 17:25:26 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:16.264 [2024-07-15 17:25:26.924464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.264 [2024-07-15 17:25:26.924544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:16.264 [2024-07-15 17:25:26.924565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:19:16.264 [2024-07-15 17:25:26.924580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.264 [2024-07-15 17:25:26.924630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.456 ms, result 0 00:19:16.264 true 00:19:16.264 17:25:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 92929 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 92929 ']' 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 92929 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92929 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.264 killing process with pid 92929 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92929' 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 92929 00:19:16.264 17:25:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 92929 00:19:16.522 [2024-07-15 17:25:27.137874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.137954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.522 [2024-07-15 17:25:27.137978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:16.522 [2024-07-15 
17:25:27.138000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.138059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:16.522 [2024-07-15 17:25:27.138888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.138919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.522 [2024-07-15 17:25:27.138937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:19:16.522 [2024-07-15 17:25:27.138951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.139266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.139300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.522 [2024-07-15 17:25:27.139315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:16.522 [2024-07-15 17:25:27.139329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.143502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.143553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.522 [2024-07-15 17:25:27.143571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.147 ms 00:19:16.522 [2024-07-15 17:25:27.143591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.151071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.151127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.522 [2024-07-15 17:25:27.151144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.432 ms 00:19:16.522 [2024-07-15 17:25:27.151170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.152673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.152720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.522 [2024-07-15 17:25:27.152736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:19:16.522 [2024-07-15 17:25:27.152749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.157115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.157172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.522 [2024-07-15 17:25:27.157188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.316 ms 00:19:16.522 [2024-07-15 17:25:27.157205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.157355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.157407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.522 [2024-07-15 17:25:27.157425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:16.522 [2024-07-15 17:25:27.157439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.159503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.159548] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:16.522 [2024-07-15 17:25:27.159564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:19:16.522 [2024-07-15 17:25:27.159580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.161047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.161088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:16.522 [2024-07-15 17:25:27.161130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:19:16.522 [2024-07-15 17:25:27.161144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.162350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.162414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.522 [2024-07-15 17:25:27.162430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:19:16.522 [2024-07-15 17:25:27.162443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.163749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.522 [2024-07-15 17:25:27.163792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.522 [2024-07-15 17:25:27.163807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:19:16.522 [2024-07-15 17:25:27.163820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.522 [2024-07-15 17:25:27.163862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.522 [2024-07-15 17:25:27.163889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.163985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164053] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.522 [2024-07-15 17:25:27.164065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164408] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 
17:25:27.164739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.164995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:16.523 [2024-07-15 17:25:27.165076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.523 [2024-07-15 17:25:27.165255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.524 [2024-07-15 17:25:27.165268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.524 [2024-07-15 17:25:27.165291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.524 [2024-07-15 17:25:27.165303] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:16.524 [2024-07-15 17:25:27.165321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.524 [2024-07-15 17:25:27.165332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.524 [2024-07-15 17:25:27.165346] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.524 [2024-07-15 17:25:27.165373] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.524 [2024-07-15 17:25:27.165400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.524 [2024-07-15 17:25:27.165425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.524 [2024-07-15 17:25:27.165440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.524 [2024-07-15 17:25:27.165451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.524 [2024-07-15 17:25:27.165463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.524 [2024-07-15 17:25:27.165474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.524 [2024-07-15 17:25:27.165491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.524 [2024-07-15 17:25:27.165513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:19:16.524 [2024-07-15 17:25:27.165531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.167719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.524 [2024-07-15 17:25:27.167756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.524 [2024-07-15 17:25:27.167772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:19:16.524 [2024-07-15 17:25:27.167786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.167935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.524 [2024-07-15 17:25:27.167956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.524 [2024-07-15 17:25:27.167970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:16.524 [2024-07-15 17:25:27.167983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.176351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.176418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.524 [2024-07-15 17:25:27.176436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.176450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.176592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.176615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.524 [2024-07-15 17:25:27.176629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.176655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.176720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.176742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.524 [2024-07-15 17:25:27.176755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.176769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.176796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.176813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.524 [2024-07-15 17:25:27.176825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.176839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.191372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.191453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.524 [2024-07-15 17:25:27.191483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.191499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.201839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.201913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.524 [2024-07-15 17:25:27.201933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 
[2024-07-15 17:25:27.201952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.524 [2024-07-15 17:25:27.202102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.524 [2024-07-15 17:25:27.202190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.524 [2024-07-15 17:25:27.202403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:16.524 [2024-07-15 17:25:27.202536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.524 [2024-07-15 17:25:27.202643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.524 [2024-07-15 17:25:27.202740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.524 [2024-07-15 17:25:27.202753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.524 [2024-07-15 17:25:27.202767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.524 [2024-07-15 17:25:27.202952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.044 ms, result 0 00:19:16.780 17:25:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:16.781 17:25:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:16.781 [2024-07-15 17:25:27.593726] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
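For reference, the test flow recorded above reduces to a handful of commands; what follows from here is spdk_dd re-initializing the ftl0 bdev from the saved config before reading the trimmed range back. A minimal sketch of that flow, assuming a running SPDK target with ftl0 attached and using only the invocations visible in this log (absolute repo paths shortened; the PID 92929 is specific to this run):

  # trim 1024 blocks at the start and at the top of the L2P range
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # stop the SPDK target started earlier by the test; this drives the 'FTL shutdown' sequence logged above
  kill 92929
  wait 92929
  # re-create ftl0 from the saved JSON config and read 65536 blocks back into a file
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 --json=test/ftl/config/ftl.json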
00:19:16.781 [2024-07-15 17:25:27.593878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92965 ] 00:19:17.038 [2024-07-15 17:25:27.738313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:17.038 [2024-07-15 17:25:27.760624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.038 [2024-07-15 17:25:27.848384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.295 [2024-07-15 17:25:27.974228] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.295 [2024-07-15 17:25:27.974317] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.295 [2024-07-15 17:25:28.135485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.295 [2024-07-15 17:25:28.135535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.295 [2024-07-15 17:25:28.135556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.295 [2024-07-15 17:25:28.135569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.295 [2024-07-15 17:25:28.138428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.295 [2024-07-15 17:25:28.138477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.295 [2024-07-15 17:25:28.138494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.829 ms 00:19:17.295 [2024-07-15 17:25:28.138506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.295 [2024-07-15 17:25:28.138621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.295 [2024-07-15 17:25:28.138949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.295 [2024-07-15 17:25:28.138997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.295 [2024-07-15 17:25:28.139011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.295 [2024-07-15 17:25:28.139024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:19:17.295 [2024-07-15 17:25:28.139036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.295 [2024-07-15 17:25:28.141103] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.295 [2024-07-15 17:25:28.144130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.295 [2024-07-15 17:25:28.144173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.295 [2024-07-15 17:25:28.144190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:19:17.295 [2024-07-15 17:25:28.144203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.295 [2024-07-15 17:25:28.144305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.295 [2024-07-15 17:25:28.144327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.295 [2024-07-15 17:25:28.144345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:17.295 [2024-07-15 
17:25:28.144377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.153290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.153341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.553 [2024-07-15 17:25:28.153376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.834 ms 00:19:17.553 [2024-07-15 17:25:28.153400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.153576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.153605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.553 [2024-07-15 17:25:28.153620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:17.553 [2024-07-15 17:25:28.153642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.153689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.153705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.553 [2024-07-15 17:25:28.153718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:17.553 [2024-07-15 17:25:28.153730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.153774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:17.553 [2024-07-15 17:25:28.155916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.155951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.553 [2024-07-15 17:25:28.155966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:19:17.553 [2024-07-15 17:25:28.155977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.156030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.156048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.553 [2024-07-15 17:25:28.156065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:17.553 [2024-07-15 17:25:28.156078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.156103] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.553 [2024-07-15 17:25:28.156133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:17.553 [2024-07-15 17:25:28.156180] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.553 [2024-07-15 17:25:28.156202] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:17.553 [2024-07-15 17:25:28.156310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.553 [2024-07-15 17:25:28.156326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.553 [2024-07-15 17:25:28.156341] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:19:17.553 [2024-07-15 17:25:28.156372] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.553 [2024-07-15 17:25:28.156389] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.553 [2024-07-15 17:25:28.156409] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:17.553 [2024-07-15 17:25:28.156437] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.553 [2024-07-15 17:25:28.156461] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.553 [2024-07-15 17:25:28.156473] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.553 [2024-07-15 17:25:28.156486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.156512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.553 [2024-07-15 17:25:28.156533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:17.553 [2024-07-15 17:25:28.156545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.156644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.553 [2024-07-15 17:25:28.156661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.553 [2024-07-15 17:25:28.156674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:17.553 [2024-07-15 17:25:28.156692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.553 [2024-07-15 17:25:28.156801] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.553 [2024-07-15 17:25:28.156834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.553 [2024-07-15 17:25:28.156848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.553 [2024-07-15 17:25:28.156860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.553 [2024-07-15 17:25:28.156872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.553 [2024-07-15 17:25:28.156883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.156894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:17.554 [2024-07-15 17:25:28.156904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.554 [2024-07-15 17:25:28.156914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:17.554 [2024-07-15 17:25:28.156929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.554 [2024-07-15 17:25:28.156941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.554 [2024-07-15 17:25:28.156951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:17.554 [2024-07-15 17:25:28.156961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.554 [2024-07-15 17:25:28.156985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.554 [2024-07-15 17:25:28.156996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:17.554 [2024-07-15 17:25:28.157006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:17.554 [2024-07-15 17:25:28.157027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.554 [2024-07-15 17:25:28.157061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.554 [2024-07-15 17:25:28.157121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.554 [2024-07-15 17:25:28.157164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.554 [2024-07-15 17:25:28.157195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.554 [2024-07-15 17:25:28.157226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.554 [2024-07-15 17:25:28.157247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.554 [2024-07-15 17:25:28.157257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:17.554 [2024-07-15 17:25:28.157267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.554 [2024-07-15 17:25:28.157278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.554 [2024-07-15 17:25:28.157288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:17.554 [2024-07-15 17:25:28.157306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.554 [2024-07-15 17:25:28.157330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:17.554 [2024-07-15 17:25:28.157340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157350] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.554 [2024-07-15 17:25:28.157387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.554 [2024-07-15 17:25:28.157401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.554 [2024-07-15 17:25:28.157424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.554 [2024-07-15 17:25:28.157435] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.554 [2024-07-15 17:25:28.157446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.554 [2024-07-15 17:25:28.157457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.554 [2024-07-15 17:25:28.157469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.554 [2024-07-15 17:25:28.157481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.554 [2024-07-15 17:25:28.157493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.554 [2024-07-15 17:25:28.157512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:17.554 [2024-07-15 17:25:28.157547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:17.554 [2024-07-15 17:25:28.157562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:17.554 [2024-07-15 17:25:28.157574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:17.554 [2024-07-15 17:25:28.157586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:17.554 [2024-07-15 17:25:28.157597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:17.554 [2024-07-15 17:25:28.157620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:17.554 [2024-07-15 17:25:28.157632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:17.554 [2024-07-15 17:25:28.157643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:17.554 [2024-07-15 17:25:28.157655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:17.554 [2024-07-15 17:25:28.157712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.554 [2024-07-15 17:25:28.157724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.554 [2024-07-15 17:25:28.157758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.554 [2024-07-15 17:25:28.157782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.554 [2024-07-15 17:25:28.157803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.554 [2024-07-15 17:25:28.157816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.157827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.554 [2024-07-15 17:25:28.157840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:19:17.554 [2024-07-15 17:25:28.157851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.184725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.184800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.554 [2024-07-15 17:25:28.184828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.765 ms 00:19:17.554 [2024-07-15 17:25:28.184851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.185071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.185116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.554 [2024-07-15 17:25:28.185141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:17.554 [2024-07-15 17:25:28.185153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.199300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.199388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.554 [2024-07-15 17:25:28.199415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.108 ms 00:19:17.554 [2024-07-15 17:25:28.199429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.199560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.199579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.554 [2024-07-15 17:25:28.199600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.554 [2024-07-15 17:25:28.199613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.200171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.200205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.554 [2024-07-15 17:25:28.200232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:17.554 [2024-07-15 17:25:28.200249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.200445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.200474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.554 [2024-07-15 17:25:28.200498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:19:17.554 [2024-07-15 17:25:28.200518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.208743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.208788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.554 [2024-07-15 17:25:28.208805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.191 ms 00:19:17.554 [2024-07-15 17:25:28.208824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.554 [2024-07-15 17:25:28.212086] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:17.554 [2024-07-15 17:25:28.212132] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:17.554 [2024-07-15 17:25:28.212151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.554 [2024-07-15 17:25:28.212164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:17.554 [2024-07-15 17:25:28.212176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:19:17.555 [2024-07-15 17:25:28.212202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.228314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.228367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:17.555 [2024-07-15 17:25:28.228387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:19:17.555 [2024-07-15 17:25:28.228418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.230453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.230494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:17.555 [2024-07-15 17:25:28.230510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.953 ms 00:19:17.555 [2024-07-15 17:25:28.230521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.232191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.232231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:17.555 [2024-07-15 17:25:28.232246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:19:17.555 [2024-07-15 17:25:28.232267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.232688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.232724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:17.555 [2024-07-15 17:25:28.232739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:19:17.555 [2024-07-15 17:25:28.232751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.255797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.255886] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:17.555 [2024-07-15 17:25:28.255909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.985 ms 00:19:17.555 [2024-07-15 17:25:28.255940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.264240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:17.555 [2024-07-15 17:25:28.285339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.285415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:17.555 [2024-07-15 17:25:28.285437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.230 ms 00:19:17.555 [2024-07-15 17:25:28.285450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.285610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.285639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:17.555 [2024-07-15 17:25:28.285653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:17.555 [2024-07-15 17:25:28.285665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.285745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.285763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:17.555 [2024-07-15 17:25:28.285776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:17.555 [2024-07-15 17:25:28.285788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.285824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.285846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:17.555 [2024-07-15 17:25:28.285859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.555 [2024-07-15 17:25:28.285870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.285911] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:17.555 [2024-07-15 17:25:28.285930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.285944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:17.555 [2024-07-15 17:25:28.285958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:17.555 [2024-07-15 17:25:28.285971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.290293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.290339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:17.555 [2024-07-15 17:25:28.290402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.287 ms 00:19:17.555 [2024-07-15 17:25:28.290423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.290521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.555 [2024-07-15 17:25:28.290542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:17.555 [2024-07-15 17:25:28.290557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:17.555 [2024-07-15 17:25:28.290570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.555 [2024-07-15 17:25:28.291764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.555 [2024-07-15 17:25:28.292960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.958 ms, result 0 00:19:17.555 [2024-07-15 17:25:28.293837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:17.555 [2024-07-15 17:25:28.302124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.423  Copying: 256/256 [MB] (average 23 MBps)[2024-07-15 17:25:39.113146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.423 [2024-07-15 17:25:39.114897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.423 [2024-07-15 17:25:39.114944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.423 [2024-07-15 17:25:39.114964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.423 [2024-07-15 17:25:39.114978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.423 [2024-07-15 17:25:39.115013] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.424 [2024-07-15 17:25:39.115857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.115900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.424 [2024-07-15 17:25:39.115916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:19:28.424 [2024-07-15 17:25:39.115928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.116242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.116275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.424 [2024-07-15 17:25:39.116290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:28.424 [2024-07-15 17:25:39.116303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.119995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.120025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.424 [2024-07-15 17:25:39.120042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.669 ms 00:19:28.424 [2024-07-15 17:25:39.120055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.127407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.127461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.424 [2024-07-15 17:25:39.127486] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.298 ms 00:19:28.424 [2024-07-15 17:25:39.127498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.129098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.129154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.424 [2024-07-15 17:25:39.129171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:19:28.424 [2024-07-15 17:25:39.129183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.133169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.133224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.424 [2024-07-15 17:25:39.133240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.939 ms 00:19:28.424 [2024-07-15 17:25:39.133253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.133402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.133433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.424 [2024-07-15 17:25:39.133448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:28.424 [2024-07-15 17:25:39.133461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.135641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.135694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.424 [2024-07-15 17:25:39.135709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:19:28.424 [2024-07-15 17:25:39.135721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.137456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.137493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.424 [2024-07-15 17:25:39.137508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:19:28.424 [2024-07-15 17:25:39.137519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.138838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.138877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.424 [2024-07-15 17:25:39.138891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:19:28.424 [2024-07-15 17:25:39.138904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.140089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.424 [2024-07-15 17:25:39.140128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.424 [2024-07-15 17:25:39.140143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:19:28.424 [2024-07-15 17:25:39.140154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.424 [2024-07-15 17:25:39.140219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.424 [2024-07-15 17:25:39.140245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.424 [2024-07-15 17:25:39.140675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140872] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.140993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 
17:25:39.141194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.425 [2024-07-15 17:25:39.141500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.425 [2024-07-15 17:25:39.141522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.425 [2024-07-15 17:25:39.141534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:28.425 [2024-07-15 17:25:39.141546] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.425 [2024-07-15 17:25:39.141557] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.425 [2024-07-15 17:25:39.141569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.425 [2024-07-15 17:25:39.141589] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.426 [2024-07-15 17:25:39.141600] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.426 [2024-07-15 17:25:39.141621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.426 [2024-07-15 17:25:39.141633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.426 [2024-07-15 17:25:39.141643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.426 [2024-07-15 17:25:39.141653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.426 [2024-07-15 17:25:39.141665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.426 [2024-07-15 17:25:39.141689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.426 [2024-07-15 17:25:39.141702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:19:28.426 [2024-07-15 17:25:39.141715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.143930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.426 [2024-07-15 17:25:39.143975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.426 [2024-07-15 17:25:39.143999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:19:28.426 [2024-07-15 17:25:39.144012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.144175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.426 [2024-07-15 17:25:39.144204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.426 [2024-07-15 17:25:39.144219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:28.426 [2024-07-15 17:25:39.144231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.151769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.151837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.426 [2024-07-15 17:25:39.151872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.151884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.152010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.152029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.426 [2024-07-15 17:25:39.152043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.152054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 
17:25:39.152117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.152136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.426 [2024-07-15 17:25:39.152157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.152169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.152196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.152211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.426 [2024-07-15 17:25:39.152224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.152236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.166656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.166744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.426 [2024-07-15 17:25:39.166774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.166787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.426 [2024-07-15 17:25:39.177192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.426 [2024-07-15 17:25:39.177327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.426 [2024-07-15 17:25:39.177440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.426 [2024-07-15 17:25:39.177596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.426 [2024-07-15 17:25:39.177712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177738] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.426 [2024-07-15 17:25:39.177820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.177900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.426 [2024-07-15 17:25:39.177931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.426 [2024-07-15 17:25:39.177946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.426 [2024-07-15 17:25:39.177957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.426 [2024-07-15 17:25:39.178135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.230 ms, result 0 00:19:28.685 00:19:28.685 00:19:28.685 17:25:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:28.685 17:25:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:29.251 17:25:40 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:29.508 [2024-07-15 17:25:40.153413] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:29.509 [2024-07-15 17:25:40.153653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93098 ] 00:19:29.509 [2024-07-15 17:25:40.308667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:29.509 [2024-07-15 17:25:40.328179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.766 [2024-07-15 17:25:40.437436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.766 [2024-07-15 17:25:40.567044] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.766 [2024-07-15 17:25:40.567153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.025 [2024-07-15 17:25:40.728938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.729033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.025 [2024-07-15 17:25:40.729055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:30.025 [2024-07-15 17:25:40.729080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.732067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.732120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.025 [2024-07-15 17:25:40.732139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:19:30.025 [2024-07-15 17:25:40.732151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.732285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.025 [2024-07-15 17:25:40.732629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.025 [2024-07-15 17:25:40.732671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.732697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.025 [2024-07-15 17:25:40.732712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:19:30.025 [2024-07-15 17:25:40.732724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.734916] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.025 [2024-07-15 17:25:40.738122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.738172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.025 [2024-07-15 17:25:40.738192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:19:30.025 [2024-07-15 17:25:40.738205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.738302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.738325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.025 [2024-07-15 17:25:40.738344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:30.025 [2024-07-15 17:25:40.738375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.747609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.747711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.025 [2024-07-15 17:25:40.747731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.161 ms 00:19:30.025 [2024-07-15 17:25:40.747743] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.747989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.748032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.025 [2024-07-15 17:25:40.748057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:30.025 [2024-07-15 17:25:40.748078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.748139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.748157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.025 [2024-07-15 17:25:40.748182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:30.025 [2024-07-15 17:25:40.748194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.748236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:30.025 [2024-07-15 17:25:40.750497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.750542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.025 [2024-07-15 17:25:40.750559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:19:30.025 [2024-07-15 17:25:40.750571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.750636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.750654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.025 [2024-07-15 17:25:40.750672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:30.025 [2024-07-15 17:25:40.750684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.750718] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.025 [2024-07-15 17:25:40.750750] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.025 [2024-07-15 17:25:40.750816] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.025 [2024-07-15 17:25:40.750844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:30.025 [2024-07-15 17:25:40.750950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.025 [2024-07-15 17:25:40.750967] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.025 [2024-07-15 17:25:40.750982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:30.025 [2024-07-15 17:25:40.750998] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751012] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751025] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:30.025 [2024-07-15 17:25:40.751041] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:30.025 [2024-07-15 17:25:40.751053] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.025 [2024-07-15 17:25:40.751064] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.025 [2024-07-15 17:25:40.751088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.751104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.025 [2024-07-15 17:25:40.751118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:19:30.025 [2024-07-15 17:25:40.751130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.751228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.025 [2024-07-15 17:25:40.751245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.025 [2024-07-15 17:25:40.751269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:30.025 [2024-07-15 17:25:40.751285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.025 [2024-07-15 17:25:40.751432] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.025 [2024-07-15 17:25:40.751462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.025 [2024-07-15 17:25:40.751476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.025 [2024-07-15 17:25:40.751512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.025 [2024-07-15 17:25:40.751545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.025 [2024-07-15 17:25:40.751572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.025 [2024-07-15 17:25:40.751583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:30.025 [2024-07-15 17:25:40.751593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.025 [2024-07-15 17:25:40.751617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.025 [2024-07-15 17:25:40.751628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:30.025 [2024-07-15 17:25:40.751639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.025 [2024-07-15 17:25:40.751660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.025 [2024-07-15 17:25:40.751692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751706] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.025 [2024-07-15 17:25:40.751727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.025 [2024-07-15 17:25:40.751770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.025 [2024-07-15 17:25:40.751802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.025 [2024-07-15 17:25:40.751823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.025 [2024-07-15 17:25:40.751833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:30.025 [2024-07-15 17:25:40.751843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.025 [2024-07-15 17:25:40.751854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.025 [2024-07-15 17:25:40.751864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:30.025 [2024-07-15 17:25:40.751875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.026 [2024-07-15 17:25:40.751885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.026 [2024-07-15 17:25:40.751896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:30.026 [2024-07-15 17:25:40.751906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.026 [2024-07-15 17:25:40.751916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.026 [2024-07-15 17:25:40.751930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:30.026 [2024-07-15 17:25:40.751941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.026 [2024-07-15 17:25:40.751951] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.026 [2024-07-15 17:25:40.751962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.026 [2024-07-15 17:25:40.751974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.026 [2024-07-15 17:25:40.751995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.026 [2024-07-15 17:25:40.752008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.026 [2024-07-15 17:25:40.752018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.026 [2024-07-15 17:25:40.752029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.026 [2024-07-15 17:25:40.752040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.026 [2024-07-15 17:25:40.752050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.026 [2024-07-15 17:25:40.752061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:30.026 [2024-07-15 17:25:40.752075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.026 [2024-07-15 17:25:40.752094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:30.026 [2024-07-15 17:25:40.752120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:30.026 [2024-07-15 17:25:40.752135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:30.026 [2024-07-15 17:25:40.752147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:30.026 [2024-07-15 17:25:40.752159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:30.026 [2024-07-15 17:25:40.752171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:30.026 [2024-07-15 17:25:40.752182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:30.026 [2024-07-15 17:25:40.752194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:30.026 [2024-07-15 17:25:40.752206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:30.026 [2024-07-15 17:25:40.752217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:30.026 [2024-07-15 17:25:40.752275] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.026 [2024-07-15 17:25:40.752288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.026 [2024-07-15 17:25:40.752322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.026 [2024-07-15 17:25:40.752339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.026 [2024-07-15 17:25:40.752351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.026 [2024-07-15 17:25:40.752395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.752409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.026 [2024-07-15 17:25:40.752422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:19:30.026 [2024-07-15 17:25:40.752433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.779064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.779161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.026 [2024-07-15 17:25:40.779196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.501 ms 00:19:30.026 [2024-07-15 17:25:40.779213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.779540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.779580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.026 [2024-07-15 17:25:40.779601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:30.026 [2024-07-15 17:25:40.779617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.793482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.793563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.026 [2024-07-15 17:25:40.793591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:19:30.026 [2024-07-15 17:25:40.793605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.793746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.793778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.026 [2024-07-15 17:25:40.793802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.026 [2024-07-15 17:25:40.793815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.794411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.794441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.026 [2024-07-15 17:25:40.794457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:19:30.026 [2024-07-15 17:25:40.794475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.794654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.794680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.026 [2024-07-15 17:25:40.794693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:30.026 [2024-07-15 17:25:40.794705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.803101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 
17:25:40.803171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.026 [2024-07-15 17:25:40.803197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.361 ms 00:19:30.026 [2024-07-15 17:25:40.803216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.806434] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:30.026 [2024-07-15 17:25:40.806492] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.026 [2024-07-15 17:25:40.806512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.806524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.026 [2024-07-15 17:25:40.806538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms 00:19:30.026 [2024-07-15 17:25:40.806565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.822812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.822949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.026 [2024-07-15 17:25:40.822973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.177 ms 00:19:30.026 [2024-07-15 17:25:40.822994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.826319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.826389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.026 [2024-07-15 17:25:40.826408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:19:30.026 [2024-07-15 17:25:40.826420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.828160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.828200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.026 [2024-07-15 17:25:40.828216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:19:30.026 [2024-07-15 17:25:40.828237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.828737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.828774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.026 [2024-07-15 17:25:40.828796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:19:30.026 [2024-07-15 17:25:40.828808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.854212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.026 [2024-07-15 17:25:40.854301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.026 [2024-07-15 17:25:40.854324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.357 ms 00:19:30.026 [2024-07-15 17:25:40.854346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.026 [2024-07-15 17:25:40.863150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.285 [2024-07-15 17:25:40.885928] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.886010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.285 [2024-07-15 17:25:40.886056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.393 ms 00:19:30.285 [2024-07-15 17:25:40.886080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.886255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.886278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.285 [2024-07-15 17:25:40.886292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:30.285 [2024-07-15 17:25:40.886304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.886402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.886424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.285 [2024-07-15 17:25:40.886451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:30.285 [2024-07-15 17:25:40.886464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.886504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.886535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.285 [2024-07-15 17:25:40.886559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:30.285 [2024-07-15 17:25:40.886572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.886620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.285 [2024-07-15 17:25:40.886642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.886656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.285 [2024-07-15 17:25:40.886671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:30.285 [2024-07-15 17:25:40.886684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.891451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.891500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.285 [2024-07-15 17:25:40.891527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:19:30.285 [2024-07-15 17:25:40.891540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.891641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:40.891663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.285 [2024-07-15 17:25:40.891677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:30.285 [2024-07-15 17:25:40.891690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:40.892918] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.285 [2024-07-15 17:25:40.894202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.626 
ms, result 0 00:19:30.285 [2024-07-15 17:25:40.895044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.285 [2024-07-15 17:25:40.903001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.285  Copying: 4096/4096 [kB] (average 23 MBps)[2024-07-15 17:25:41.078185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.285 [2024-07-15 17:25:41.079970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.080018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.285 [2024-07-15 17:25:41.080047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.285 [2024-07-15 17:25:41.080074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.080108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.285 [2024-07-15 17:25:41.080954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.080986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.285 [2024-07-15 17:25:41.081002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:19:30.285 [2024-07-15 17:25:41.081014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.083042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.083086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.285 [2024-07-15 17:25:41.083117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:19:30.285 [2024-07-15 17:25:41.083129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.087107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.087154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.285 [2024-07-15 17:25:41.087171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:19:30.285 [2024-07-15 17:25:41.087183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.094749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.094796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.285 [2024-07-15 17:25:41.094812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.508 ms 00:19:30.285 [2024-07-15 17:25:41.094824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.096570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.096612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.285 [2024-07-15 17:25:41.096628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:19:30.285 [2024-07-15 17:25:41.096640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.101023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.101066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:19:30.285 [2024-07-15 17:25:41.101083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:19:30.285 [2024-07-15 17:25:41.101095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.101266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.101287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.285 [2024-07-15 17:25:41.101302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:30.285 [2024-07-15 17:25:41.101313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.103441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.103493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:30.285 [2024-07-15 17:25:41.103512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:19:30.285 [2024-07-15 17:25:41.103523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.105246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.105286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:30.285 [2024-07-15 17:25:41.105301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:19:30.285 [2024-07-15 17:25:41.105312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.106527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.106564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.285 [2024-07-15 17:25:41.106579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:19:30.285 [2024-07-15 17:25:41.106590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.107823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-15 17:25:41.107861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.285 [2024-07-15 17:25:41.107876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:19:30.285 [2024-07-15 17:25:41.107887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-15 17:25:41.107929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:30.285 [2024-07-15 17:25:41.107955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.285 [2024-07-15 17:25:41.107971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.285 [2024-07-15 17:25:41.107983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.107996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108033] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 
17:25:41.108329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:19:30.286 [2024-07-15 17:25:41.108651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.108993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.286 [2024-07-15 17:25:41.109080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.287 [2024-07-15 17:25:41.109229] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.287 [2024-07-15 17:25:41.109241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:30.287 [2024-07-15 17:25:41.109254] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.287 [2024-07-15 17:25:41.109285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.287 [2024-07-15 17:25:41.109297] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.287 [2024-07-15 17:25:41.109315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.287 [2024-07-15 17:25:41.109327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:19:30.287 [2024-07-15 17:25:41.109344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:30.287 [2024-07-15 17:25:41.109355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.287 [2024-07-15 17:25:41.109380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.287 [2024-07-15 17:25:41.109392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.287 [2024-07-15 17:25:41.109403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.287 [2024-07-15 17:25:41.109428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.287 [2024-07-15 17:25:41.109442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:19:30.287 [2024-07-15 17:25:41.109454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.111643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.287 [2024-07-15 17:25:41.111682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:30.287 [2024-07-15 17:25:41.111698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:19:30.287 [2024-07-15 17:25:41.111710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.111845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.287 [2024-07-15 17:25:41.111874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:30.287 [2024-07-15 17:25:41.111887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:30.287 [2024-07-15 17:25:41.111899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.119382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.287 [2024-07-15 17:25:41.119436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.287 [2024-07-15 17:25:41.119453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.287 [2024-07-15 17:25:41.119465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.119594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.287 [2024-07-15 17:25:41.119613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.287 [2024-07-15 17:25:41.119639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.287 [2024-07-15 17:25:41.119650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.119734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.287 [2024-07-15 17:25:41.119754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.287 [2024-07-15 17:25:41.119773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.287 [2024-07-15 17:25:41.119785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.119811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.287 [2024-07-15 17:25:41.119826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.287 [2024-07-15 17:25:41.119838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.287 [2024-07-15 
17:25:41.119850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.287 [2024-07-15 17:25:41.134748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.287 [2024-07-15 17:25:41.134846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.287 [2024-07-15 17:25:41.134877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.287 [2024-07-15 17:25:41.134892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.544 [2024-07-15 17:25:41.145393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.544 [2024-07-15 17:25:41.145470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.544 [2024-07-15 17:25:41.145491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.544 [2024-07-15 17:25:41.145504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.544 [2024-07-15 17:25:41.145606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.544 [2024-07-15 17:25:41.145625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.544 [2024-07-15 17:25:41.145638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.544 [2024-07-15 17:25:41.145661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.544 [2024-07-15 17:25:41.145700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.544 [2024-07-15 17:25:41.145717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.544 [2024-07-15 17:25:41.145730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.544 [2024-07-15 17:25:41.145753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.544 [2024-07-15 17:25:41.145851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.544 [2024-07-15 17:25:41.145872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.544 [2024-07-15 17:25:41.145898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.544 [2024-07-15 17:25:41.145913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.544 [2024-07-15 17:25:41.145967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.544 [2024-07-15 17:25:41.145986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.545 [2024-07-15 17:25:41.146010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.545 [2024-07-15 17:25:41.146022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-15 17:25:41.146073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.545 [2024-07-15 17:25:41.146090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.545 [2024-07-15 17:25:41.146103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.545 [2024-07-15 17:25:41.146115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-15 17:25:41.146190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.545 [2024-07-15 17:25:41.146212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.545 [2024-07-15 17:25:41.146225] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.545 [2024-07-15 17:25:41.146236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-15 17:25:41.146433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.409 ms, result 0 00:19:30.801 00:19:30.801 00:19:30.801 17:25:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=93112 00:19:30.801 17:25:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:30.801 17:25:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 93112 00:19:30.801 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 93112 ']' 00:19:30.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.802 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.802 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.802 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.802 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.802 17:25:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:30.802 [2024-07-15 17:25:41.566150] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:30.802 [2024-07-15 17:25:41.566320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93112 ] 00:19:31.059 [2024-07-15 17:25:41.710110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:31.059 [2024-07-15 17:25:41.728699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.059 [2024-07-15 17:25:41.830223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.656 17:25:42 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.656 17:25:42 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:31.656 17:25:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:31.915 [2024-07-15 17:25:42.695762] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.915 [2024-07-15 17:25:42.695868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.175 [2024-07-15 17:25:42.873626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.873714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.175 [2024-07-15 17:25:42.873737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.175 [2024-07-15 17:25:42.873753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.876597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.876648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.175 [2024-07-15 17:25:42.876667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:19:32.175 [2024-07-15 17:25:42.876682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.876808] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.175 [2024-07-15 17:25:42.877166] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.175 [2024-07-15 17:25:42.877219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.877238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.175 [2024-07-15 17:25:42.877252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:32.175 [2024-07-15 17:25:42.877266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.879428] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.175 [2024-07-15 17:25:42.882340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.882398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.175 [2024-07-15 17:25:42.882421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:19:32.175 [2024-07-15 17:25:42.882435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.882552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.882573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.175 [2024-07-15 17:25:42.882593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:32.175 [2024-07-15 17:25:42.882608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.891503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 
17:25:42.891556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.175 [2024-07-15 17:25:42.891579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.810 ms 00:19:32.175 [2024-07-15 17:25:42.891593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.891808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.891846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.175 [2024-07-15 17:25:42.891884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:32.175 [2024-07-15 17:25:42.891897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.891962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.891981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.175 [2024-07-15 17:25:42.891997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:32.175 [2024-07-15 17:25:42.892009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.892063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.175 [2024-07-15 17:25:42.894244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.894299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.175 [2024-07-15 17:25:42.894316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:19:32.175 [2024-07-15 17:25:42.894332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.894419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.894442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.175 [2024-07-15 17:25:42.894465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:32.175 [2024-07-15 17:25:42.894481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.894512] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.175 [2024-07-15 17:25:42.894547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.175 [2024-07-15 17:25:42.894613] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.175 [2024-07-15 17:25:42.894645] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:32.175 [2024-07-15 17:25:42.894753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.175 [2024-07-15 17:25:42.894772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.175 [2024-07-15 17:25:42.894787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:32.175 [2024-07-15 17:25:42.894815] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.175 [2024-07-15 17:25:42.894838] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.175 [2024-07-15 17:25:42.894857] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.175 [2024-07-15 17:25:42.894870] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.175 [2024-07-15 17:25:42.894887] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.175 [2024-07-15 17:25:42.894899] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.175 [2024-07-15 17:25:42.894922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.894934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.175 [2024-07-15 17:25:42.894948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:19:32.175 [2024-07-15 17:25:42.894960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.895061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.175 [2024-07-15 17:25:42.895091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.175 [2024-07-15 17:25:42.895110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:32.175 [2024-07-15 17:25:42.895122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.175 [2024-07-15 17:25:42.895275] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.175 [2024-07-15 17:25:42.895303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.175 [2024-07-15 17:25:42.895319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.175 [2024-07-15 17:25:42.895340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.175 [2024-07-15 17:25:42.895373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.175 [2024-07-15 17:25:42.895388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.175 [2024-07-15 17:25:42.895410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.175 [2024-07-15 17:25:42.895431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.175 [2024-07-15 17:25:42.895454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.175 [2024-07-15 17:25:42.895466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.176 [2024-07-15 17:25:42.895480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.176 [2024-07-15 17:25:42.895491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.176 [2024-07-15 17:25:42.895504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.176 [2024-07-15 17:25:42.895515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.176 [2024-07-15 17:25:42.895529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.176 [2024-07-15 17:25:42.895540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.176 [2024-07-15 17:25:42.895564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895580] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.176 [2024-07-15 17:25:42.895607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.176 [2024-07-15 17:25:42.895657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.176 [2024-07-15 17:25:42.895698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.176 [2024-07-15 17:25:42.895734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.176 [2024-07-15 17:25:42.895773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.176 [2024-07-15 17:25:42.895797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.176 [2024-07-15 17:25:42.895809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.176 [2024-07-15 17:25:42.895824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.176 [2024-07-15 17:25:42.895836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.176 [2024-07-15 17:25:42.895849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.176 [2024-07-15 17:25:42.895859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.176 [2024-07-15 17:25:42.895884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.176 [2024-07-15 17:25:42.895899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895910] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.176 [2024-07-15 17:25:42.895924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.176 [2024-07-15 17:25:42.895936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.176 [2024-07-15 17:25:42.895950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.176 [2024-07-15 17:25:42.895963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.176 [2024-07-15 17:25:42.895976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.176 [2024-07-15 17:25:42.895987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.176 [2024-07-15 17:25:42.896001] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.176 [2024-07-15 17:25:42.896012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.176 [2024-07-15 17:25:42.896028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.176 [2024-07-15 17:25:42.896041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.176 [2024-07-15 17:25:42.896058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.176 [2024-07-15 17:25:42.896088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.176 [2024-07-15 17:25:42.896102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.176 [2024-07-15 17:25:42.896116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.176 [2024-07-15 17:25:42.896129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.176 [2024-07-15 17:25:42.896144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.176 [2024-07-15 17:25:42.896156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.176 [2024-07-15 17:25:42.896170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.176 [2024-07-15 17:25:42.896182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.176 [2024-07-15 17:25:42.896195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.176 [2024-07-15 17:25:42.896266] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.176 [2024-07-15 17:25:42.896281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:32.176 [2024-07-15 17:25:42.896308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.176 [2024-07-15 17:25:42.896320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.176 [2024-07-15 17:25:42.896334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.176 [2024-07-15 17:25:42.896348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.896383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.176 [2024-07-15 17:25:42.896399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:19:32.176 [2024-07-15 17:25:42.896413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.912783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.912857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.176 [2024-07-15 17:25:42.912880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.252 ms 00:19:32.176 [2024-07-15 17:25:42.912900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.913101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.913158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.176 [2024-07-15 17:25:42.913174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:32.176 [2024-07-15 17:25:42.913189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.927470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.927541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.176 [2024-07-15 17:25:42.927564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.245 ms 00:19:32.176 [2024-07-15 17:25:42.927584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.927722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.927755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.176 [2024-07-15 17:25:42.927772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:32.176 [2024-07-15 17:25:42.927787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.928385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.928422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.176 [2024-07-15 17:25:42.928449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:19:32.176 [2024-07-15 17:25:42.928464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.928642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.928686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.176 [2024-07-15 17:25:42.928702] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:32.176 [2024-07-15 17:25:42.928716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.938273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.938342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.176 [2024-07-15 17:25:42.938377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.523 ms 00:19:32.176 [2024-07-15 17:25:42.938396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.941719] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:32.176 [2024-07-15 17:25:42.941770] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.176 [2024-07-15 17:25:42.941802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.941819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.176 [2024-07-15 17:25:42.941834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:19:32.176 [2024-07-15 17:25:42.941848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.958122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.958199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.176 [2024-07-15 17:25:42.958224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.131 ms 00:19:32.176 [2024-07-15 17:25:42.958242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.960623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.960672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.176 [2024-07-15 17:25:42.960689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:19:32.176 [2024-07-15 17:25:42.960715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.962473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.962523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.176 [2024-07-15 17:25:42.962540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:19:32.176 [2024-07-15 17:25:42.962554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.963036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.963075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.176 [2024-07-15 17:25:42.963091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:32.176 [2024-07-15 17:25:42.963106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:42.998508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:42.998592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.176 [2024-07-15 17:25:42.998615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.368 ms 
00:19:32.176 [2024-07-15 17:25:42.998646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:43.007084] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.176 [2024-07-15 17:25:43.028641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:43.028724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.176 [2024-07-15 17:25:43.028750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.810 ms 00:19:32.176 [2024-07-15 17:25:43.028764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:43.028916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:43.028941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.176 [2024-07-15 17:25:43.028958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.176 [2024-07-15 17:25:43.028985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:43.029074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:43.029092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.176 [2024-07-15 17:25:43.029109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:32.176 [2024-07-15 17:25:43.029135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:43.029177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:43.029194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.176 [2024-07-15 17:25:43.029220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:32.176 [2024-07-15 17:25:43.029233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.176 [2024-07-15 17:25:43.029279] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.176 [2024-07-15 17:25:43.029296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.176 [2024-07-15 17:25:43.029311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.176 [2024-07-15 17:25:43.029323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:32.176 [2024-07-15 17:25:43.029337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.492 [2024-07-15 17:25:43.033702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.492 [2024-07-15 17:25:43.033758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.492 [2024-07-15 17:25:43.033777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:19:32.492 [2024-07-15 17:25:43.033796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.492 [2024-07-15 17:25:43.033902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.492 [2024-07-15 17:25:43.033934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.492 [2024-07-15 17:25:43.033953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:32.492 [2024-07-15 17:25:43.033967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.492 [2024-07-15 
17:25:43.035207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.492 [2024-07-15 17:25:43.036457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.195 ms, result 0 00:19:32.492 [2024-07-15 17:25:43.038020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.492 Some configs were skipped because the RPC state that can call them passed over. 00:19:32.492 17:25:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:32.492 [2024-07-15 17:25:43.315294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.492 [2024-07-15 17:25:43.315387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:32.492 [2024-07-15 17:25:43.315416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:19:32.492 [2024-07-15 17:25:43.315430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.492 [2024-07-15 17:25:43.315484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.951 ms, result 0 00:19:32.492 true 00:19:32.761 17:25:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:33.018 [2024-07-15 17:25:43.651144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.018 [2024-07-15 17:25:43.651234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:33.018 [2024-07-15 17:25:43.651257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:19:33.018 [2024-07-15 17:25:43.651273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.018 [2024-07-15 17:25:43.651324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.395 ms, result 0 00:19:33.018 true 00:19:33.018 17:25:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 93112 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 93112 ']' 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 93112 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93112 00:19:33.018 killing process with pid 93112 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93112' 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 93112 00:19:33.018 17:25:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 93112 00:19:33.277 [2024-07-15 17:25:43.873925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.874005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:33.277 [2024-07-15 17:25:43.874032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:33.277 [2024-07-15 
17:25:43.874046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.874086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:33.277 [2024-07-15 17:25:43.874915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.874944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:33.277 [2024-07-15 17:25:43.874962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:19:33.277 [2024-07-15 17:25:43.874976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.875319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.875349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:33.277 [2024-07-15 17:25:43.875377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:33.277 [2024-07-15 17:25:43.875392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.879545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.879599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:33.277 [2024-07-15 17:25:43.879618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.114 ms 00:19:33.277 [2024-07-15 17:25:43.879638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.887034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.887096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:33.277 [2024-07-15 17:25:43.887114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.346 ms 00:19:33.277 [2024-07-15 17:25:43.887132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.889186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.889242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:33.277 [2024-07-15 17:25:43.889259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:19:33.277 [2024-07-15 17:25:43.889274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.893754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.893810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:33.277 [2024-07-15 17:25:43.893848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.428 ms 00:19:33.277 [2024-07-15 17:25:43.893867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.894027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.894051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:33.277 [2024-07-15 17:25:43.894066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:33.277 [2024-07-15 17:25:43.894080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.896318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.896401] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:33.277 [2024-07-15 17:25:43.896435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:19:33.277 [2024-07-15 17:25:43.896483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.898519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.898569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:33.277 [2024-07-15 17:25:43.898586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:19:33.277 [2024-07-15 17:25:43.898609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.899948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.899995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:33.277 [2024-07-15 17:25:43.900012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:19:33.277 [2024-07-15 17:25:43.900026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.901230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.277 [2024-07-15 17:25:43.901277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:33.277 [2024-07-15 17:25:43.901293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:19:33.277 [2024-07-15 17:25:43.901307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.277 [2024-07-15 17:25:43.901352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:33.277 [2024-07-15 17:25:43.901414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901748] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.901985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902411] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:33.277 [2024-07-15 17:25:43.902797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.902999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 
17:25:43.903085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:33.278 [2024-07-15 17:25:43.903474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:33.278 [2024-07-15 17:25:43.903685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:33.278 [2024-07-15 17:25:43.903698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:33.278 [2024-07-15 17:25:43.903717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:33.278 [2024-07-15 17:25:43.903728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:33.278 [2024-07-15 17:25:43.903741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:33.278 [2024-07-15 17:25:43.903753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:33.278 [2024-07-15 17:25:43.903768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:33.278 [2024-07-15 17:25:43.903779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:33.278 [2024-07-15 17:25:43.903793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:33.278 [2024-07-15 17:25:43.903803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:33.278 [2024-07-15 17:25:43.903816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:33.278 [2024-07-15 17:25:43.903828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.278 [2024-07-15 17:25:43.903855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:33.278 [2024-07-15 17:25:43.903868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:19:33.278 [2024-07-15 17:25:43.903887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:33.278 [2024-07-15 17:25:43.906179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.278 [2024-07-15 17:25:43.906220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:33.278 [2024-07-15 17:25:43.906237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:19:33.278 [2024-07-15 17:25:43.906253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.278 [2024-07-15 17:25:43.906418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.278 [2024-07-15 17:25:43.906446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:33.278 [2024-07-15 17:25:43.906480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:33.278 [2024-07-15 17:25:43.906496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.278 [2024-07-15 17:25:43.915253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.278 [2024-07-15 17:25:43.915329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.278 [2024-07-15 17:25:43.915348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.278 [2024-07-15 17:25:43.915385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.278 [2024-07-15 17:25:43.915547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.278 [2024-07-15 17:25:43.915572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.278 [2024-07-15 17:25:43.915587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.278 [2024-07-15 17:25:43.915605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.278 [2024-07-15 17:25:43.915679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.915702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.279 [2024-07-15 17:25:43.915716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.915730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.915769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.915787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.279 [2024-07-15 17:25:43.915800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.915814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.934567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.934660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.279 [2024-07-15 17:25:43.934690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.934715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.945678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.945782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.279 [2024-07-15 17:25:43.945805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 
[2024-07-15 17:25:43.945847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.945967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.945991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.279 [2024-07-15 17:25:43.946005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.946072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.946091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.279 [2024-07-15 17:25:43.946104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.946221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.946265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.279 [2024-07-15 17:25:43.946278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.946346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.946402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:33.279 [2024-07-15 17:25:43.946439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.946544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.946575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.279 [2024-07-15 17:25:43.946605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.946726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.279 [2024-07-15 17:25:43.946764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.279 [2024-07-15 17:25:43.946804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.279 [2024-07-15 17:25:43.946833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.279 [2024-07-15 17:25:43.947059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.105 ms, result 0 00:19:33.537 17:25:44 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:33.537 [2024-07-15 17:25:44.344415] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:19:33.537 [2024-07-15 17:25:44.344658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93159 ] 00:19:33.795 [2024-07-15 17:25:44.500455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:33.795 [2024-07-15 17:25:44.522087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.795 [2024-07-15 17:25:44.623719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.053 [2024-07-15 17:25:44.753825] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.053 [2024-07-15 17:25:44.753924] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.312 [2024-07-15 17:25:44.916097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.916183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.312 [2024-07-15 17:25:44.916207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:34.312 [2024-07-15 17:25:44.916233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.919423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.919485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.312 [2024-07-15 17:25:44.919518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:19:34.312 [2024-07-15 17:25:44.919541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.919736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.312 [2024-07-15 17:25:44.920092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.312 [2024-07-15 17:25:44.920120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.920144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.312 [2024-07-15 17:25:44.920158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:19:34.312 [2024-07-15 17:25:44.920181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.922498] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:34.312 [2024-07-15 17:25:44.925838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.926052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:34.312 [2024-07-15 17:25:44.926183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:19:34.312 [2024-07-15 17:25:44.926236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.926388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.926453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:34.312 [2024-07-15 17:25:44.926503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:34.312 [2024-07-15 
17:25:44.926635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.936089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.936377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.312 [2024-07-15 17:25:44.936411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.262 ms 00:19:34.312 [2024-07-15 17:25:44.936426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.936644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.936676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.312 [2024-07-15 17:25:44.936693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:34.312 [2024-07-15 17:25:44.936705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.936756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.936773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.312 [2024-07-15 17:25:44.936787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:34.312 [2024-07-15 17:25:44.936800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.312 [2024-07-15 17:25:44.936851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:34.312 [2024-07-15 17:25:44.939272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.312 [2024-07-15 17:25:44.939327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.313 [2024-07-15 17:25:44.939346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:19:34.313 [2024-07-15 17:25:44.939500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.313 [2024-07-15 17:25:44.939589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.313 [2024-07-15 17:25:44.939616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.313 [2024-07-15 17:25:44.939637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:34.313 [2024-07-15 17:25:44.939650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.313 [2024-07-15 17:25:44.939685] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:34.313 [2024-07-15 17:25:44.939717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:34.313 [2024-07-15 17:25:44.939788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:34.313 [2024-07-15 17:25:44.939812] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:34.313 [2024-07-15 17:25:44.939917] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.313 [2024-07-15 17:25:44.939934] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.313 [2024-07-15 17:25:44.939957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:19:34.313 [2024-07-15 17:25:44.939984] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940000] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940014] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:34.313 [2024-07-15 17:25:44.940031] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.313 [2024-07-15 17:25:44.940043] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.313 [2024-07-15 17:25:44.940055] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.313 [2024-07-15 17:25:44.940077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.313 [2024-07-15 17:25:44.940094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.313 [2024-07-15 17:25:44.940108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:19:34.313 [2024-07-15 17:25:44.940121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.313 [2024-07-15 17:25:44.940218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.313 [2024-07-15 17:25:44.940233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.313 [2024-07-15 17:25:44.940246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:34.313 [2024-07-15 17:25:44.940264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.313 [2024-07-15 17:25:44.940406] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.313 [2024-07-15 17:25:44.940427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.313 [2024-07-15 17:25:44.940441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.313 [2024-07-15 17:25:44.940495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.313 [2024-07-15 17:25:44.940531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.313 [2024-07-15 17:25:44.940561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.313 [2024-07-15 17:25:44.940573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:34.313 [2024-07-15 17:25:44.940584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.313 [2024-07-15 17:25:44.940608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.313 [2024-07-15 17:25:44.940620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:34.313 [2024-07-15 17:25:44.940632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:34.313 [2024-07-15 17:25:44.940655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.313 [2024-07-15 17:25:44.940690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.313 [2024-07-15 17:25:44.940725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.313 [2024-07-15 17:25:44.940769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.313 [2024-07-15 17:25:44.940804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.313 [2024-07-15 17:25:44.940827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.313 [2024-07-15 17:25:44.940839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.313 [2024-07-15 17:25:44.940862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.313 [2024-07-15 17:25:44.940873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:34.313 [2024-07-15 17:25:44.940884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.313 [2024-07-15 17:25:44.940897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.313 [2024-07-15 17:25:44.940909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:34.313 [2024-07-15 17:25:44.940921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.313 [2024-07-15 17:25:44.940948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:34.313 [2024-07-15 17:25:44.940961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.940972] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.313 [2024-07-15 17:25:44.940985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.313 [2024-07-15 17:25:44.940998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.313 [2024-07-15 17:25:44.941011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.313 [2024-07-15 17:25:44.941024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.313 [2024-07-15 17:25:44.941036] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.313 [2024-07-15 17:25:44.941048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.313 [2024-07-15 17:25:44.941060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.313 [2024-07-15 17:25:44.941071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.313 [2024-07-15 17:25:44.941083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.313 [2024-07-15 17:25:44.941096] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.313 [2024-07-15 17:25:44.941116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:34.313 [2024-07-15 17:25:44.941160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:34.313 [2024-07-15 17:25:44.941177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:34.313 [2024-07-15 17:25:44.941190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:34.313 [2024-07-15 17:25:44.941203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:34.313 [2024-07-15 17:25:44.941215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:34.313 [2024-07-15 17:25:44.941227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:34.313 [2024-07-15 17:25:44.941239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:34.313 [2024-07-15 17:25:44.941251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:34.313 [2024-07-15 17:25:44.941263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:34.313 [2024-07-15 17:25:44.941325] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.313 [2024-07-15 17:25:44.941338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.313 [2024-07-15 17:25:44.941377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.313 [2024-07-15 17:25:44.941396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.313 [2024-07-15 17:25:44.941410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.314 [2024-07-15 17:25:44.941423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.941436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.314 [2024-07-15 17:25:44.941449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:34.314 [2024-07-15 17:25:44.941473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.968456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.968844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.314 [2024-07-15 17:25:44.969028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:19:34.314 [2024-07-15 17:25:44.969283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.969648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.969844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.314 [2024-07-15 17:25:44.970065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:34.314 [2024-07-15 17:25:44.970235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.984556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.984884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.314 [2024-07-15 17:25:44.985022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.088 ms 00:19:34.314 [2024-07-15 17:25:44.985209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.985430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.985537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.314 [2024-07-15 17:25:44.985665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:34.314 [2024-07-15 17:25:44.985728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.986449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.986592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.314 [2024-07-15 17:25:44.986706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:19:34.314 [2024-07-15 17:25:44.986820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.987052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.987118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.314 [2024-07-15 17:25:44.987497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:34.314 [2024-07-15 17:25:44.987553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:44.996553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:44.996858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.314 [2024-07-15 17:25:44.997001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.696 ms 00:19:34.314 [2024-07-15 17:25:44.997066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.000528] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:34.314 [2024-07-15 17:25:45.000723] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:34.314 [2024-07-15 17:25:45.000861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.001093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:34.314 [2024-07-15 17:25:45.001171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:19:34.314 [2024-07-15 17:25:45.001232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.017684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.017771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:34.314 [2024-07-15 17:25:45.017795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.349 ms 00:19:34.314 [2024-07-15 17:25:45.017825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.021294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.021346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:34.314 [2024-07-15 17:25:45.021381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:19:34.314 [2024-07-15 17:25:45.021395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.023142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.023185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:34.314 [2024-07-15 17:25:45.023203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:19:34.314 [2024-07-15 17:25:45.023226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.023754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.023785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.314 [2024-07-15 17:25:45.023801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:19:34.314 [2024-07-15 17:25:45.023819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.048570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.048672] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:34.314 [2024-07-15 17:25:45.048697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.695 ms 00:19:34.314 [2024-07-15 17:25:45.048719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.057702] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:34.314 [2024-07-15 17:25:45.080530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.080648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:34.314 [2024-07-15 17:25:45.080671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.606 ms 00:19:34.314 [2024-07-15 17:25:45.080685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.080857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.080880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:34.314 [2024-07-15 17:25:45.080894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:34.314 [2024-07-15 17:25:45.080917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.080999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.081018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:34.314 [2024-07-15 17:25:45.081033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:34.314 [2024-07-15 17:25:45.081046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.081093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.081115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.314 [2024-07-15 17:25:45.081149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:34.314 [2024-07-15 17:25:45.081162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.081210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:34.314 [2024-07-15 17:25:45.081229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.081241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:34.314 [2024-07-15 17:25:45.081255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:34.314 [2024-07-15 17:25:45.081267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.086256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.086338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.314 [2024-07-15 17:25:45.086387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:19:34.314 [2024-07-15 17:25:45.086403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.086519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.314 [2024-07-15 17:25:45.086540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.314 [2024-07-15 17:25:45.086568] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:34.314 [2024-07-15 17:25:45.086581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.314 [2024-07-15 17:25:45.087852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.314 [2024-07-15 17:25:45.089423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.391 ms, result 0 00:19:34.314 [2024-07-15 17:25:45.090241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.314 [2024-07-15 17:25:45.098226] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.439  Copying: 25/256 [MB] (25 MBps) Copying: 49/256 [MB] (24 MBps) Copying: 73/256 [MB] (24 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 122/256 [MB] (24 MBps) Copying: 146/256 [MB] (23 MBps) Copying: 169/256 [MB] (23 MBps) Copying: 193/256 [MB] (23 MBps) Copying: 216/256 [MB] (22 MBps) Copying: 237/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-15 17:25:56.111215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:45.439 [2024-07-15 17:25:56.113860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.114040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:45.439 [2024-07-15 17:25:56.114166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:45.439 [2024-07-15 17:25:56.114764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.114849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:45.439 [2024-07-15 17:25:56.116260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.116415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:45.439 [2024-07-15 17:25:56.116580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:19:45.439 [2024-07-15 17:25:56.116633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.117082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.117236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:45.439 [2024-07-15 17:25:56.117369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:45.439 [2024-07-15 17:25:56.117509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.121253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.121402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:45.439 [2024-07-15 17:25:56.121547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:19:45.439 [2024-07-15 17:25:56.121597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.130126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.130327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:45.439 [2024-07-15 17:25:56.130464] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:19:45.439 [2024-07-15 17:25:56.130516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.132538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.132689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:45.439 [2024-07-15 17:25:56.132798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:19:45.439 [2024-07-15 17:25:56.132845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.136838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.136991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:45.439 [2024-07-15 17:25:56.137157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.916 ms 00:19:45.439 [2024-07-15 17:25:56.137223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.137512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.137654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:45.439 [2024-07-15 17:25:56.137768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:45.439 [2024-07-15 17:25:56.137909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.140761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.140818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:45.439 [2024-07-15 17:25:56.140835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:19:45.439 [2024-07-15 17:25:56.140848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.142608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.142646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:45.439 [2024-07-15 17:25:56.142661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:19:45.439 [2024-07-15 17:25:56.142673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.143808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.143848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:45.439 [2024-07-15 17:25:56.143864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:19:45.439 [2024-07-15 17:25:56.143876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.144919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.439 [2024-07-15 17:25:56.144957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:45.439 [2024-07-15 17:25:56.144974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:19:45.439 [2024-07-15 17:25:56.144986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.439 [2024-07-15 17:25:56.145035] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:45.439 [2024-07-15 17:25:56.145062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:45.439 [2024-07-15 17:25:56.145682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145777] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.145998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 
17:25:56.146102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:45.440 [2024-07-15 17:25:56.146434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
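The band dump above lists 100 user bands of 261120 blocks each. With the 4 KiB FTL block size implied by the layout dump earlier (blk_sz:0x20 shown as 0.12 MiB), that is 1020 MiB per band and about 102000 MiB in total, just under the 102400 MiB data_btm region; the remaining ~4 MiB per band is presumably taken by per-band bookkeeping rather than user data (a guess from the numbers, not something the log states). A quick check of that arithmetic:

  # 100 bands x 261120 blocks x 4 KiB, expressed in MiB
  echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))   # -> 102000, vs. 102400 MiB data_btm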
00:19:45.440 [2024-07-15 17:25:56.146457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:45.440 [2024-07-15 17:25:56.146471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2767dea2-a9dd-4cc3-b756-0d973e3d9830 00:19:45.440 [2024-07-15 17:25:56.146484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:45.440 [2024-07-15 17:25:56.146496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:45.440 [2024-07-15 17:25:56.146508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:45.440 [2024-07-15 17:25:56.146545] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:45.440 [2024-07-15 17:25:56.146558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:45.440 [2024-07-15 17:25:56.146577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:45.440 [2024-07-15 17:25:56.146589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:45.440 [2024-07-15 17:25:56.146600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:45.440 [2024-07-15 17:25:56.146612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:45.440 [2024-07-15 17:25:56.146634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.440 [2024-07-15 17:25:56.146658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:45.440 [2024-07-15 17:25:56.146673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:19:45.440 [2024-07-15 17:25:56.146685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.149719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.440 [2024-07-15 17:25:56.149763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:45.440 [2024-07-15 17:25:56.149779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:19:45.440 [2024-07-15 17:25:56.149791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.149977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.440 [2024-07-15 17:25:56.149997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:45.440 [2024-07-15 17:25:56.150012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:19:45.440 [2024-07-15 17:25:56.150024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.160082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.160164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.440 [2024-07-15 17:25:56.160182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.160196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.160344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.160415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.440 [2024-07-15 17:25:56.160432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.160445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 
17:25:56.160523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.160545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.440 [2024-07-15 17:25:56.160566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.160580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.160608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.160623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.440 [2024-07-15 17:25:56.160637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.160649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.186455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.186536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.440 [2024-07-15 17:25:56.186559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.186573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.201152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.201238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.440 [2024-07-15 17:25:56.201259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.201272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.440 [2024-07-15 17:25:56.201390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.440 [2024-07-15 17:25:56.201410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.440 [2024-07-15 17:25:56.201424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.440 [2024-07-15 17:25:56.201447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.201503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.441 [2024-07-15 17:25:56.201519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.441 [2024-07-15 17:25:56.201532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.441 [2024-07-15 17:25:56.201556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.201665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.441 [2024-07-15 17:25:56.201686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.441 [2024-07-15 17:25:56.201712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.441 [2024-07-15 17:25:56.201724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.201793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.441 [2024-07-15 17:25:56.201814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:45.441 [2024-07-15 17:25:56.201828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.441 [2024-07-15 17:25:56.201841] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.201909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.441 [2024-07-15 17:25:56.201935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.441 [2024-07-15 17:25:56.201950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.441 [2024-07-15 17:25:56.201962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.202053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.441 [2024-07-15 17:25:56.202074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.441 [2024-07-15 17:25:56.202087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.441 [2024-07-15 17:25:56.202101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.441 [2024-07-15 17:25:56.202373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.470 ms, result 0 00:19:45.699 00:19:45.699 00:19:45.957 17:25:56 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:46.522 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:46.522 Process with pid 93112 is not found 00:19:46.522 17:25:57 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 93112 00:19:46.522 17:25:57 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 93112 ']' 00:19:46.522 17:25:57 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 93112 00:19:46.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93112) - No such process 00:19:46.522 17:25:57 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 93112 is not found' 00:19:46.522 ************************************ 00:19:46.522 END TEST ftl_trim 00:19:46.522 ************************************ 00:19:46.522 00:19:46.522 real 0m58.165s 00:19:46.522 user 1m20.873s 00:19:46.522 sys 0m7.505s 00:19:46.522 17:25:57 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.522 17:25:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:46.522 17:25:57 ftl -- common/autotest_common.sh@1142 -- # return 0 00:19:46.522 17:25:57 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:46.522 17:25:57 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:46.522 17:25:57 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.522 17:25:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:46.522 ************************************ 00:19:46.522 START TEST ftl_restore 00:19:46.522 ************************************ 00:19:46.522 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:46.780 * Looking for test storage... 00:19:46.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:46.780 
17:25:57 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VXJ6OMLSY5 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=93344 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 93344 00:19:46.780 17:25:57 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 93344 ']' 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.780 17:25:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:46.780 [2024-07-15 17:25:57.557903] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:46.780 [2024-07-15 17:25:57.558235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93344 ] 00:19:47.038 [2024-07-15 17:25:57.705566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
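While spdk_tgt comes up here, it is worth spelling out how restore.sh arrived at this point: the xtrace above (ftl/restore.sh@15-25) maps the command line "-c 0000:00:10.0 0000:00:11.0" onto devices, with -c selecting the NV-cache controller, the leftover positional argument becoming the base device, and the RPC timeout fixed at 240 s. A minimal reconstruction of that option handling (a sketch only; the -u and -f branches are not exercised in this run, so their meanings below are assumptions):

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;     # -c 0000:00:10.0 -> NV cache controller BDF
      u) uuid=$OPTARG ;;         # assumed: reuse an existing FTL instance by UUID
      f) fast=1 ;;               # assumed: some fast/force mode, unused in this run
    esac
  done
  shift $((OPTIND - 1))          # evaluates to "shift 2" in the trace above
  device=$1                      # 0000:00:11.0 -> base device BDF
  timeout=240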
00:19:47.038 [2024-07-15 17:25:57.726813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.038 [2024-07-15 17:25:57.858739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.972 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.972 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:47.972 17:25:58 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:48.229 17:25:58 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:48.229 17:25:58 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:48.229 17:25:58 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:48.229 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:48.229 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:48.229 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:48.229 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:48.229 17:25:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:48.488 { 00:19:48.488 "name": "nvme0n1", 00:19:48.488 "aliases": [ 00:19:48.488 "4e55e4ac-5659-4e8b-8934-cd762d69e936" 00:19:48.488 ], 00:19:48.488 "product_name": "NVMe disk", 00:19:48.488 "block_size": 4096, 00:19:48.488 "num_blocks": 1310720, 00:19:48.488 "uuid": "4e55e4ac-5659-4e8b-8934-cd762d69e936", 00:19:48.488 "assigned_rate_limits": { 00:19:48.488 "rw_ios_per_sec": 0, 00:19:48.488 "rw_mbytes_per_sec": 0, 00:19:48.488 "r_mbytes_per_sec": 0, 00:19:48.488 "w_mbytes_per_sec": 0 00:19:48.488 }, 00:19:48.488 "claimed": true, 00:19:48.488 "claim_type": "read_many_write_one", 00:19:48.488 "zoned": false, 00:19:48.488 "supported_io_types": { 00:19:48.488 "read": true, 00:19:48.488 "write": true, 00:19:48.488 "unmap": true, 00:19:48.488 "flush": true, 00:19:48.488 "reset": true, 00:19:48.488 "nvme_admin": true, 00:19:48.488 "nvme_io": true, 00:19:48.488 "nvme_io_md": false, 00:19:48.488 "write_zeroes": true, 00:19:48.488 "zcopy": false, 00:19:48.488 "get_zone_info": false, 00:19:48.488 "zone_management": false, 00:19:48.488 "zone_append": false, 00:19:48.488 "compare": true, 00:19:48.488 "compare_and_write": false, 00:19:48.488 "abort": true, 00:19:48.488 "seek_hole": false, 00:19:48.488 "seek_data": false, 00:19:48.488 "copy": true, 00:19:48.488 "nvme_iov_md": false 00:19:48.488 }, 00:19:48.488 "driver_specific": { 00:19:48.488 "nvme": [ 00:19:48.488 { 00:19:48.488 "pci_address": "0000:00:11.0", 00:19:48.488 "trid": { 00:19:48.488 "trtype": "PCIe", 00:19:48.488 "traddr": "0000:00:11.0" 00:19:48.488 }, 00:19:48.488 "ctrlr_data": { 00:19:48.488 "cntlid": 0, 00:19:48.488 "vendor_id": "0x1b36", 00:19:48.488 "model_number": "QEMU NVMe Ctrl", 00:19:48.488 "serial_number": "12341", 
00:19:48.488 "firmware_revision": "8.0.0", 00:19:48.488 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:48.488 "oacs": { 00:19:48.488 "security": 0, 00:19:48.488 "format": 1, 00:19:48.488 "firmware": 0, 00:19:48.488 "ns_manage": 1 00:19:48.488 }, 00:19:48.488 "multi_ctrlr": false, 00:19:48.488 "ana_reporting": false 00:19:48.488 }, 00:19:48.488 "vs": { 00:19:48.488 "nvme_version": "1.4" 00:19:48.488 }, 00:19:48.488 "ns_data": { 00:19:48.488 "id": 1, 00:19:48.488 "can_share": false 00:19:48.488 } 00:19:48.488 } 00:19:48.488 ], 00:19:48.488 "mp_policy": "active_passive" 00:19:48.488 } 00:19:48.488 } 00:19:48.488 ]' 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:48.488 17:25:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:48.488 17:25:59 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:48.488 17:25:59 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:48.488 17:25:59 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:48.488 17:25:59 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:48.488 17:25:59 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:48.746 17:25:59 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=15cf0dda-ed68-4563-84f8-52cd4d47ba11 00:19:48.746 17:25:59 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:48.746 17:25:59 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15cf0dda-ed68-4563-84f8-52cd4d47ba11 00:19:49.311 17:25:59 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:49.311 17:26:00 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0fd2b964-0640-46c3-9bb7-c6cb8630dd9b 00:19:49.311 17:26:00 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0fd2b964-0640-46c3-9bb7-c6cb8630dd9b 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:49.876 17:26:00 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:49.876 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:49.876 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:49.876 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:49.876 
17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:49.876 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:50.134 { 00:19:50.134 "name": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:50.134 "aliases": [ 00:19:50.134 "lvs/nvme0n1p0" 00:19:50.134 ], 00:19:50.134 "product_name": "Logical Volume", 00:19:50.134 "block_size": 4096, 00:19:50.134 "num_blocks": 26476544, 00:19:50.134 "uuid": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:50.134 "assigned_rate_limits": { 00:19:50.134 "rw_ios_per_sec": 0, 00:19:50.134 "rw_mbytes_per_sec": 0, 00:19:50.134 "r_mbytes_per_sec": 0, 00:19:50.134 "w_mbytes_per_sec": 0 00:19:50.134 }, 00:19:50.134 "claimed": false, 00:19:50.134 "zoned": false, 00:19:50.134 "supported_io_types": { 00:19:50.134 "read": true, 00:19:50.134 "write": true, 00:19:50.134 "unmap": true, 00:19:50.134 "flush": false, 00:19:50.134 "reset": true, 00:19:50.134 "nvme_admin": false, 00:19:50.134 "nvme_io": false, 00:19:50.134 "nvme_io_md": false, 00:19:50.134 "write_zeroes": true, 00:19:50.134 "zcopy": false, 00:19:50.134 "get_zone_info": false, 00:19:50.134 "zone_management": false, 00:19:50.134 "zone_append": false, 00:19:50.134 "compare": false, 00:19:50.134 "compare_and_write": false, 00:19:50.134 "abort": false, 00:19:50.134 "seek_hole": true, 00:19:50.134 "seek_data": true, 00:19:50.134 "copy": false, 00:19:50.134 "nvme_iov_md": false 00:19:50.134 }, 00:19:50.134 "driver_specific": { 00:19:50.134 "lvol": { 00:19:50.134 "lvol_store_uuid": "0fd2b964-0640-46c3-9bb7-c6cb8630dd9b", 00:19:50.134 "base_bdev": "nvme0n1", 00:19:50.134 "thin_provision": true, 00:19:50.134 "num_allocated_clusters": 0, 00:19:50.134 "snapshot": false, 00:19:50.134 "clone": false, 00:19:50.134 "esnap_clone": false 00:19:50.134 } 00:19:50.134 } 00:19:50.134 } 00:19:50.134 ]' 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:50.134 17:26:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:50.134 17:26:00 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:50.134 17:26:00 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:50.134 17:26:00 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:50.390 17:26:01 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:50.390 17:26:01 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:50.390 17:26:01 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:50.390 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:50.390 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:50.390 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:50.390 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # 
local nb 00:19:50.390 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:50.647 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:50.647 { 00:19:50.647 "name": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:50.647 "aliases": [ 00:19:50.647 "lvs/nvme0n1p0" 00:19:50.647 ], 00:19:50.647 "product_name": "Logical Volume", 00:19:50.647 "block_size": 4096, 00:19:50.647 "num_blocks": 26476544, 00:19:50.647 "uuid": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:50.647 "assigned_rate_limits": { 00:19:50.647 "rw_ios_per_sec": 0, 00:19:50.647 "rw_mbytes_per_sec": 0, 00:19:50.647 "r_mbytes_per_sec": 0, 00:19:50.647 "w_mbytes_per_sec": 0 00:19:50.647 }, 00:19:50.647 "claimed": false, 00:19:50.647 "zoned": false, 00:19:50.647 "supported_io_types": { 00:19:50.647 "read": true, 00:19:50.647 "write": true, 00:19:50.647 "unmap": true, 00:19:50.647 "flush": false, 00:19:50.647 "reset": true, 00:19:50.647 "nvme_admin": false, 00:19:50.647 "nvme_io": false, 00:19:50.647 "nvme_io_md": false, 00:19:50.647 "write_zeroes": true, 00:19:50.647 "zcopy": false, 00:19:50.647 "get_zone_info": false, 00:19:50.647 "zone_management": false, 00:19:50.647 "zone_append": false, 00:19:50.647 "compare": false, 00:19:50.648 "compare_and_write": false, 00:19:50.648 "abort": false, 00:19:50.648 "seek_hole": true, 00:19:50.648 "seek_data": true, 00:19:50.648 "copy": false, 00:19:50.648 "nvme_iov_md": false 00:19:50.648 }, 00:19:50.648 "driver_specific": { 00:19:50.648 "lvol": { 00:19:50.648 "lvol_store_uuid": "0fd2b964-0640-46c3-9bb7-c6cb8630dd9b", 00:19:50.648 "base_bdev": "nvme0n1", 00:19:50.648 "thin_provision": true, 00:19:50.648 "num_allocated_clusters": 0, 00:19:50.648 "snapshot": false, 00:19:50.648 "clone": false, 00:19:50.648 "esnap_clone": false 00:19:50.648 } 00:19:50.648 } 00:19:50.648 } 00:19:50.648 ]' 00:19:50.648 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:50.648 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:50.648 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:50.904 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:50.904 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:50.904 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:50.904 17:26:01 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:50.904 17:26:01 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:51.161 17:26:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:51.161 17:26:01 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:51.161 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:51.161 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:51.161 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:51.161 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:51.161 17:26:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f35a35e1-05fa-4b79-968f-b62716ccf9d3 00:19:51.419 17:26:02 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:51.419 { 00:19:51.419 "name": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:51.419 "aliases": [ 00:19:51.419 "lvs/nvme0n1p0" 00:19:51.419 ], 00:19:51.419 "product_name": "Logical Volume", 00:19:51.419 "block_size": 4096, 00:19:51.419 "num_blocks": 26476544, 00:19:51.419 "uuid": "f35a35e1-05fa-4b79-968f-b62716ccf9d3", 00:19:51.419 "assigned_rate_limits": { 00:19:51.419 "rw_ios_per_sec": 0, 00:19:51.419 "rw_mbytes_per_sec": 0, 00:19:51.419 "r_mbytes_per_sec": 0, 00:19:51.419 "w_mbytes_per_sec": 0 00:19:51.419 }, 00:19:51.419 "claimed": false, 00:19:51.419 "zoned": false, 00:19:51.419 "supported_io_types": { 00:19:51.419 "read": true, 00:19:51.419 "write": true, 00:19:51.419 "unmap": true, 00:19:51.419 "flush": false, 00:19:51.419 "reset": true, 00:19:51.419 "nvme_admin": false, 00:19:51.419 "nvme_io": false, 00:19:51.419 "nvme_io_md": false, 00:19:51.419 "write_zeroes": true, 00:19:51.419 "zcopy": false, 00:19:51.419 "get_zone_info": false, 00:19:51.419 "zone_management": false, 00:19:51.419 "zone_append": false, 00:19:51.419 "compare": false, 00:19:51.419 "compare_and_write": false, 00:19:51.419 "abort": false, 00:19:51.419 "seek_hole": true, 00:19:51.419 "seek_data": true, 00:19:51.419 "copy": false, 00:19:51.419 "nvme_iov_md": false 00:19:51.419 }, 00:19:51.419 "driver_specific": { 00:19:51.419 "lvol": { 00:19:51.419 "lvol_store_uuid": "0fd2b964-0640-46c3-9bb7-c6cb8630dd9b", 00:19:51.419 "base_bdev": "nvme0n1", 00:19:51.419 "thin_provision": true, 00:19:51.419 "num_allocated_clusters": 0, 00:19:51.419 "snapshot": false, 00:19:51.419 "clone": false, 00:19:51.419 "esnap_clone": false 00:19:51.419 } 00:19:51.419 } 00:19:51.419 } 00:19:51.419 ]' 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:51.419 17:26:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f35a35e1-05fa-4b79-968f-b62716ccf9d3 --l2p_dram_limit 10' 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:51.419 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:51.419 17:26:02 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f35a35e1-05fa-4b79-968f-b62716ccf9d3 --l2p_dram_limit 10 -c nvc0n1p0 00:19:51.678 [2024-07-15 17:26:02.462430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.462512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.678 [2024-07-15 17:26:02.462539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:51.678 
[2024-07-15 17:26:02.462556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.462652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.462679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.678 [2024-07-15 17:26:02.462693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:51.678 [2024-07-15 17:26:02.462726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.462759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:51.678 [2024-07-15 17:26:02.463183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.678 [2024-07-15 17:26:02.463208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.463224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.678 [2024-07-15 17:26:02.463238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:19:51.678 [2024-07-15 17:26:02.463255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.463430] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 20dc4d38-671b-47b6-9eb1-0791517c23db 00:19:51.678 [2024-07-15 17:26:02.465912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.465956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:51.678 [2024-07-15 17:26:02.465989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:51.678 [2024-07-15 17:26:02.466003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.480510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.480592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.678 [2024-07-15 17:26:02.480637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.404 ms 00:19:51.678 [2024-07-15 17:26:02.480655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.480849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.480872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.678 [2024-07-15 17:26:02.480891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:51.678 [2024-07-15 17:26:02.480904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.481037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.481058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:51.678 [2024-07-15 17:26:02.481075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:51.678 [2024-07-15 17:26:02.481088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.481136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.678 [2024-07-15 17:26:02.484202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 
17:26:02.484247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.678 [2024-07-15 17:26:02.484263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:19:51.678 [2024-07-15 17:26:02.484279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.484332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.484352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:51.678 [2024-07-15 17:26:02.484385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:51.678 [2024-07-15 17:26:02.484406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.484439] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:51.678 [2024-07-15 17:26:02.484626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:51.678 [2024-07-15 17:26:02.484648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:51.678 [2024-07-15 17:26:02.484668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:51.678 [2024-07-15 17:26:02.484685] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:51.678 [2024-07-15 17:26:02.484709] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:51.678 [2024-07-15 17:26:02.484722] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:51.678 [2024-07-15 17:26:02.484737] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:51.678 [2024-07-15 17:26:02.484749] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:51.678 [2024-07-15 17:26:02.484764] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:51.678 [2024-07-15 17:26:02.484776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.484791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:51.678 [2024-07-15 17:26:02.484804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:51.678 [2024-07-15 17:26:02.484819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.484912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.678 [2024-07-15 17:26:02.484950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:51.678 [2024-07-15 17:26:02.484966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:51.678 [2024-07-15 17:26:02.484981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.678 [2024-07-15 17:26:02.485120] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:51.678 [2024-07-15 17:26:02.485170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:51.678 [2024-07-15 17:26:02.485186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.678 [2024-07-15 17:26:02.485215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.678 [2024-07-15 17:26:02.485227] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:51.678 [2024-07-15 17:26:02.485242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:51.678 [2024-07-15 17:26:02.485253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:51.678 [2024-07-15 17:26:02.485267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:51.678 [2024-07-15 17:26:02.485278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:51.678 [2024-07-15 17:26:02.485291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.678 [2024-07-15 17:26:02.485302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:51.678 [2024-07-15 17:26:02.485315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:51.678 [2024-07-15 17:26:02.485326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.678 [2024-07-15 17:26:02.485344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:51.678 [2024-07-15 17:26:02.485355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:51.678 [2024-07-15 17:26:02.485385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.678 [2024-07-15 17:26:02.485398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:51.679 [2024-07-15 17:26:02.485412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:51.679 [2024-07-15 17:26:02.485449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:51.679 [2024-07-15 17:26:02.485490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:51.679 [2024-07-15 17:26:02.485526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:51.679 [2024-07-15 17:26:02.485567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:51.679 [2024-07-15 17:26:02.485606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.679 [2024-07-15 17:26:02.485631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:51.679 [2024-07-15 17:26:02.485644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:51.679 [2024-07-15 17:26:02.485655] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.679 [2024-07-15 17:26:02.485669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:51.679 [2024-07-15 17:26:02.485681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:51.679 [2024-07-15 17:26:02.485694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:51.679 [2024-07-15 17:26:02.485719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:51.679 [2024-07-15 17:26:02.485730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485744] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:51.679 [2024-07-15 17:26:02.485756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:51.679 [2024-07-15 17:26:02.485775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.679 [2024-07-15 17:26:02.485806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:51.679 [2024-07-15 17:26:02.485817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:51.679 [2024-07-15 17:26:02.485831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:51.679 [2024-07-15 17:26:02.485843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:51.679 [2024-07-15 17:26:02.485859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:51.679 [2024-07-15 17:26:02.485871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:51.679 [2024-07-15 17:26:02.485891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:51.679 [2024-07-15 17:26:02.485907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.485926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:51.679 [2024-07-15 17:26:02.485939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:51.679 [2024-07-15 17:26:02.485955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:51.679 [2024-07-15 17:26:02.485967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:51.679 [2024-07-15 17:26:02.485982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:51.679 [2024-07-15 17:26:02.485994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:51.679 [2024-07-15 17:26:02.486012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:51.679 [2024-07-15 17:26:02.486024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:51.679 [2024-07-15 17:26:02.486039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:51.679 [2024-07-15 17:26:02.486051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:51.679 [2024-07-15 17:26:02.486120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:51.679 [2024-07-15 17:26:02.486134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:51.679 [2024-07-15 17:26:02.486163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:51.679 [2024-07-15 17:26:02.486178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:51.679 [2024-07-15 17:26:02.486190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:51.679 [2024-07-15 17:26:02.486206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.679 [2024-07-15 17:26:02.486219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:51.679 [2024-07-15 17:26:02.486238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:19:51.679 [2024-07-15 17:26:02.486250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.679 [2024-07-15 17:26:02.486320] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
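Condensed for reference, the RPC calls traced above assemble the following bdev stack; this is an illustrative replay of commands already shown in this log (BDFs, sizes and UUIDs are the ones from this run), not an independent recipe.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device 0000:00:11.0 -> lvstore 'lvs' -> thin-provisioned 103424 MiB lvol used as the FTL base
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 0fd2b964-0640-46c3-9bb7-c6cb8630dd9b
    # cache device 0000:00:10.0 -> 5171 MiB split used as the FTL write-buffer / NV cache
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev with a 10 MiB L2P DRAM limit; its startup trace is what the log shows around this point
    $RPC -t 240 bdev_ftl_create -b ftl0 -d f35a35e1-05fa-4b79-968f-b62716ccf9d3 --l2p_dram_limit 10 -c nvc0n1p0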
00:19:51.679 [2024-07-15 17:26:02.486342] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:54.963 [2024-07-15 17:26:05.430707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.430797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:54.963 [2024-07-15 17:26:05.430829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2944.383 ms 00:19:54.963 [2024-07-15 17:26:05.430847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.447454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.447540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.963 [2024-07-15 17:26:05.447575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.445 ms 00:19:54.963 [2024-07-15 17:26:05.447612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.447846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.447874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.963 [2024-07-15 17:26:05.447895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:54.963 [2024-07-15 17:26:05.447921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.463277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.463389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.963 [2024-07-15 17:26:05.463438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.266 ms 00:19:54.963 [2024-07-15 17:26:05.463454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.463558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.463577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.963 [2024-07-15 17:26:05.463596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.963 [2024-07-15 17:26:05.463625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.464404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.464457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.963 [2024-07-15 17:26:05.464482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:19:54.963 [2024-07-15 17:26:05.464512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.464724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.464745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.963 [2024-07-15 17:26:05.464764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:19:54.963 [2024-07-15 17:26:05.464779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.476384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.476459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.963 [2024-07-15 
17:26:05.476499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.565 ms 00:19:54.963 [2024-07-15 17:26:05.476538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.489570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:54.963 [2024-07-15 17:26:05.494297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.494349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.963 [2024-07-15 17:26:05.494394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.579 ms 00:19:54.963 [2024-07-15 17:26:05.494432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.575643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.575758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:54.963 [2024-07-15 17:26:05.575786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.127 ms 00:19:54.963 [2024-07-15 17:26:05.575810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.576137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.576168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.963 [2024-07-15 17:26:05.576187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:54.963 [2024-07-15 17:26:05.576205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.580672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.580739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:54.963 [2024-07-15 17:26:05.580764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:19:54.963 [2024-07-15 17:26:05.580782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.584580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.584645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:54.963 [2024-07-15 17:26:05.584669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.738 ms 00:19:54.963 [2024-07-15 17:26:05.584687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.585255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.585295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.963 [2024-07-15 17:26:05.585315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:54.963 [2024-07-15 17:26:05.585336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.629883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.629989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:54.963 [2024-07-15 17:26:05.630016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.490 ms 00:19:54.963 [2024-07-15 17:26:05.630035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.636891] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.636988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:54.963 [2024-07-15 17:26:05.637015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.785 ms 00:19:54.963 [2024-07-15 17:26:05.637035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.963 [2024-07-15 17:26:05.643341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.963 [2024-07-15 17:26:05.643452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:54.963 [2024-07-15 17:26:05.643477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms 00:19:54.963 [2024-07-15 17:26:05.643495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.964 [2024-07-15 17:26:05.648388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.964 [2024-07-15 17:26:05.648448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.964 [2024-07-15 17:26:05.648473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:19:54.964 [2024-07-15 17:26:05.648495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.964 [2024-07-15 17:26:05.648578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.964 [2024-07-15 17:26:05.648609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.964 [2024-07-15 17:26:05.648627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:54.964 [2024-07-15 17:26:05.648645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.964 [2024-07-15 17:26:05.648770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.964 [2024-07-15 17:26:05.648799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.964 [2024-07-15 17:26:05.648820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:54.964 [2024-07-15 17:26:05.648838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.964 [2024-07-15 17:26:05.650617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3187.373 ms, result 0 00:19:54.964 { 00:19:54.964 "name": "ftl0", 00:19:54.964 "uuid": "20dc4d38-671b-47b6-9eb1-0791517c23db" 00:19:54.964 } 00:19:54.964 17:26:05 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:54.964 17:26:05 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:55.222 17:26:05 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:55.222 17:26:05 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:55.482 [2024-07-15 17:26:06.197572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.197646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:55.482 [2024-07-15 17:26:06.197688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.482 [2024-07-15 17:26:06.197717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.197761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.482 
[2024-07-15 17:26:06.198701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.198754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:55.482 [2024-07-15 17:26:06.198771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:19:55.482 [2024-07-15 17:26:06.198796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.199096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.199130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:55.482 [2024-07-15 17:26:06.199145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:55.482 [2024-07-15 17:26:06.199160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.202458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.202508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:55.482 [2024-07-15 17:26:06.202524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:19:55.482 [2024-07-15 17:26:06.202538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.209115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.209159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:55.482 [2024-07-15 17:26:06.209194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.550 ms 00:19:55.482 [2024-07-15 17:26:06.209209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.210931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.211015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:55.482 [2024-07-15 17:26:06.211032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:19:55.482 [2024-07-15 17:26:06.211045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.216001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.216054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:55.482 [2024-07-15 17:26:06.216073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:19:55.482 [2024-07-15 17:26:06.216087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.216269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.216296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:55.482 [2024-07-15 17:26:06.216309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:55.482 [2024-07-15 17:26:06.216324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.218560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.218606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:55.482 [2024-07-15 17:26:06.218623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:19:55.482 [2024-07-15 17:26:06.218638] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.220205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.220254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:55.482 [2024-07-15 17:26:06.220271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:19:55.482 [2024-07-15 17:26:06.220286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.221563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.221604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:55.482 [2024-07-15 17:26:06.221621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:19:55.482 [2024-07-15 17:26:06.221635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.222925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.482 [2024-07-15 17:26:06.222970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:55.482 [2024-07-15 17:26:06.222987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:19:55.482 [2024-07-15 17:26:06.223009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.482 [2024-07-15 17:26:06.223051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:55.482 [2024-07-15 17:26:06.223080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223303] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 [2024-07-15 17:26:06.223669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:55.482 
[2024-07-15 17:26:06.223681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.223988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:19:55.483 [2024-07-15 17:26:06.224030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:55.483 [2024-07-15 17:26:06.224580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:55.483 [2024-07-15 17:26:06.224593] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20dc4d38-671b-47b6-9eb1-0791517c23db 00:19:55.483 [2024-07-15 17:26:06.224609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:55.483 [2024-07-15 17:26:06.224620] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:55.483 [2024-07-15 17:26:06.224636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:55.483 [2024-07-15 17:26:06.224652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:55.483 [2024-07-15 17:26:06.224666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:55.483 [2024-07-15 17:26:06.224679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:55.483 [2024-07-15 17:26:06.224693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:55.483 [2024-07-15 17:26:06.224704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:55.483 [2024-07-15 17:26:06.224717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:55.483 [2024-07-15 17:26:06.224729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.483 [2024-07-15 17:26:06.224744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:55.483 [2024-07-15 17:26:06.224758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:19:55.483 [2024-07-15 17:26:06.224773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.227288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.483 [2024-07-15 17:26:06.227466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:55.483 
[2024-07-15 17:26:06.227598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:19:55.483 [2024-07-15 17:26:06.227654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.227918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.483 [2024-07-15 17:26:06.227974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:55.483 [2024-07-15 17:26:06.228016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:55.483 [2024-07-15 17:26:06.228132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.236519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.236790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.483 [2024-07-15 17:26:06.236908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.483 [2024-07-15 17:26:06.236976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.237095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.237216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.483 [2024-07-15 17:26:06.237283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.483 [2024-07-15 17:26:06.237334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.237551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.237639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.483 [2024-07-15 17:26:06.237698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.483 [2024-07-15 17:26:06.237751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.237869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.237930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.483 [2024-07-15 17:26:06.237982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.483 [2024-07-15 17:26:06.238112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.254559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.254889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.483 [2024-07-15 17:26:06.255022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.483 [2024-07-15 17:26:06.255079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.483 [2024-07-15 17:26:06.265491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.483 [2024-07-15 17:26:06.265782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.484 [2024-07-15 17:26:06.265912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.265968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.266120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.266282] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.484 [2024-07-15 17:26:06.266341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.266490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.266587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.266612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.484 [2024-07-15 17:26:06.266628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.266644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.266753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.266779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.484 [2024-07-15 17:26:06.266793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.266808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.266873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.266904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:55.484 [2024-07-15 17:26:06.266917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.266931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.266985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.267008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.484 [2024-07-15 17:26:06.267021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.267035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.267096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.484 [2024-07-15 17:26:06.267117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.484 [2024-07-15 17:26:06.267131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.484 [2024-07-15 17:26:06.267146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.484 [2024-07-15 17:26:06.267318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.716 ms, result 0 00:19:55.484 true 00:19:55.484 17:26:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 93344 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 93344 ']' 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 93344 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93344 00:19:55.484 killing process with pid 93344 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:55.484 
17:26:06 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93344' 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 93344 00:19:55.484 17:26:06 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 93344 00:19:58.766 17:26:09 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:04.021 262144+0 records in 00:20:04.021 262144+0 records out 00:20:04.021 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.90131 s, 219 MB/s 00:20:04.021 17:26:14 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:06.010 17:26:16 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.010 [2024-07-15 17:26:16.768157] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:20:06.010 [2024-07-15 17:26:16.768348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93579 ] 00:20:06.268 [2024-07-15 17:26:16.932073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:06.268 [2024-07-15 17:26:16.957187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.268 [2024-07-15 17:26:17.059793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.527 [2024-07-15 17:26:17.191895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.527 [2024-07-15 17:26:17.192013] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.527 [2024-07-15 17:26:17.354021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.354113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.527 [2024-07-15 17:26:17.354146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.527 [2024-07-15 17:26:17.354159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.354243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.354264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.527 [2024-07-15 17:26:17.354283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:06.527 [2024-07-15 17:26:17.354296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.354337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.527 [2024-07-15 17:26:17.354654] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.527 [2024-07-15 17:26:17.354681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.354694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.527 [2024-07-15 17:26:17.354713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:20:06.527 
[2024-07-15 17:26:17.354726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.356713] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.527 [2024-07-15 17:26:17.359569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.359617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.527 [2024-07-15 17:26:17.359647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:20:06.527 [2024-07-15 17:26:17.359660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.359733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.359752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.527 [2024-07-15 17:26:17.359766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:06.527 [2024-07-15 17:26:17.359777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.368903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.368968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.527 [2024-07-15 17:26:17.369001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.045 ms 00:20:06.527 [2024-07-15 17:26:17.369013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.369120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.369143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.527 [2024-07-15 17:26:17.369171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:06.527 [2024-07-15 17:26:17.369184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.369260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.369285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.527 [2024-07-15 17:26:17.369299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:06.527 [2024-07-15 17:26:17.369311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.369349] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.527 [2024-07-15 17:26:17.371511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.371545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.527 [2024-07-15 17:26:17.371577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:20:06.527 [2024-07-15 17:26:17.371588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.371637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.371654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.527 [2024-07-15 17:26:17.371666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:06.527 [2024-07-15 17:26:17.371678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 
17:26:17.371731] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.527 [2024-07-15 17:26:17.371764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.527 [2024-07-15 17:26:17.371811] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.527 [2024-07-15 17:26:17.371836] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:06.527 [2024-07-15 17:26:17.371942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.527 [2024-07-15 17:26:17.371958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.527 [2024-07-15 17:26:17.371973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:06.527 [2024-07-15 17:26:17.371990] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372004] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372017] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.527 [2024-07-15 17:26:17.372028] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.527 [2024-07-15 17:26:17.372039] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.527 [2024-07-15 17:26:17.372051] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.527 [2024-07-15 17:26:17.372077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.372095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.527 [2024-07-15 17:26:17.372108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:06.527 [2024-07-15 17:26:17.372119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.372218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.527 [2024-07-15 17:26:17.372234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.527 [2024-07-15 17:26:17.372246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:06.527 [2024-07-15 17:26:17.372258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.527 [2024-07-15 17:26:17.372364] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.527 [2024-07-15 17:26:17.372398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.527 [2024-07-15 17:26:17.372418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.527 [2024-07-15 17:26:17.372454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372477] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md 00:20:06.527 [2024-07-15 17:26:17.372489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.527 [2024-07-15 17:26:17.372511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.527 [2024-07-15 17:26:17.372522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.527 [2024-07-15 17:26:17.372536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.527 [2024-07-15 17:26:17.372548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.527 [2024-07-15 17:26:17.372559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:06.527 [2024-07-15 17:26:17.372586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.527 [2024-07-15 17:26:17.372612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.527 [2024-07-15 17:26:17.372646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.527 [2024-07-15 17:26:17.372679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.527 [2024-07-15 17:26:17.372712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.527 [2024-07-15 17:26:17.372755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.527 [2024-07-15 17:26:17.372777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.527 [2024-07-15 17:26:17.372788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:06.527 [2024-07-15 17:26:17.372799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.527 [2024-07-15 17:26:17.372810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.527 [2024-07-15 17:26:17.372821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:06.528 [2024-07-15 17:26:17.372832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.528 [2024-07-15 17:26:17.372843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.528 [2024-07-15 17:26:17.372854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:06.528 [2024-07-15 17:26:17.372865] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.528 [2024-07-15 17:26:17.372876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.528 [2024-07-15 17:26:17.372887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:06.528 [2024-07-15 17:26:17.372914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.528 [2024-07-15 17:26:17.372941] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.528 [2024-07-15 17:26:17.372970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.528 [2024-07-15 17:26:17.372983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.528 [2024-07-15 17:26:17.372995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.528 [2024-07-15 17:26:17.373006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.528 [2024-07-15 17:26:17.373020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.528 [2024-07-15 17:26:17.373033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.528 [2024-07-15 17:26:17.373044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.528 [2024-07-15 17:26:17.373055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.528 [2024-07-15 17:26:17.373067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.528 [2024-07-15 17:26:17.373079] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.528 [2024-07-15 17:26:17.373095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.528 [2024-07-15 17:26:17.373121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:06.528 [2024-07-15 17:26:17.373133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:06.528 [2024-07-15 17:26:17.373145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:06.528 [2024-07-15 17:26:17.373158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:06.528 [2024-07-15 17:26:17.373184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:06.528 [2024-07-15 17:26:17.373198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:06.528 [2024-07-15 17:26:17.373210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:06.528 [2024-07-15 17:26:17.373222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:06.528 [2024-07-15 17:26:17.373234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:06.528 [2024-07-15 17:26:17.373306] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.528 [2024-07-15 17:26:17.373320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.528 [2024-07-15 17:26:17.373346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.528 [2024-07-15 17:26:17.373370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.528 [2024-07-15 17:26:17.373385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.528 [2024-07-15 17:26:17.373398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.528 [2024-07-15 17:26:17.373419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.528 [2024-07-15 17:26:17.373432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:20:06.528 [2024-07-15 17:26:17.373443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.399799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.399866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.786 [2024-07-15 17:26:17.399921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.256 ms 00:20:06.786 [2024-07-15 17:26:17.399939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.400088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.400105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.786 [2024-07-15 17:26:17.400136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:06.786 [2024-07-15 17:26:17.400149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.414748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.414826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.786 [2024-07-15 17:26:17.414863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.493 ms 00:20:06.786 [2024-07-15 17:26:17.414876] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.414962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.415000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.786 [2024-07-15 17:26:17.415013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.786 [2024-07-15 17:26:17.415024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.415695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.415743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.786 [2024-07-15 17:26:17.415760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:06.786 [2024-07-15 17:26:17.415772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.415950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.786 [2024-07-15 17:26:17.415975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.786 [2024-07-15 17:26:17.416003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:20:06.786 [2024-07-15 17:26:17.416016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.786 [2024-07-15 17:26:17.424139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.424205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.787 [2024-07-15 17:26:17.424225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:20:06.787 [2024-07-15 17:26:17.424237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.427458] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:06.787 [2024-07-15 17:26:17.427505] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.787 [2024-07-15 17:26:17.427571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.427585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.787 [2024-07-15 17:26:17.427612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:20:06.787 [2024-07-15 17:26:17.427624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.444718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.444761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.787 [2024-07-15 17:26:17.444796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.033 ms 00:20:06.787 [2024-07-15 17:26:17.444846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.447189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.447230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.787 [2024-07-15 17:26:17.447247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:20:06.787 [2024-07-15 17:26:17.447259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 
17:26:17.449128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.449178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.787 [2024-07-15 17:26:17.449196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:20:06.787 [2024-07-15 17:26:17.449207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.449680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.449744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.787 [2024-07-15 17:26:17.449760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:20:06.787 [2024-07-15 17:26:17.449776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.473560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.473632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.787 [2024-07-15 17:26:17.473654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.757 ms 00:20:06.787 [2024-07-15 17:26:17.473667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.482033] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:06.787 [2024-07-15 17:26:17.485874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.485912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.787 [2024-07-15 17:26:17.485930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.106 ms 00:20:06.787 [2024-07-15 17:26:17.485956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.486068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.486092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.787 [2024-07-15 17:26:17.486107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:06.787 [2024-07-15 17:26:17.486123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.486222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.486247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.787 [2024-07-15 17:26:17.486259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:06.787 [2024-07-15 17:26:17.486272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.486305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.486331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.787 [2024-07-15 17:26:17.486345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.787 [2024-07-15 17:26:17.486407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.486460] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.787 [2024-07-15 17:26:17.486478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.486490] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.787 [2024-07-15 17:26:17.486511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:06.787 [2024-07-15 17:26:17.486523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.490739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.490783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.787 [2024-07-15 17:26:17.490802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.187 ms 00:20:06.787 [2024-07-15 17:26:17.490814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.490897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.787 [2024-07-15 17:26:17.490935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.787 [2024-07-15 17:26:17.490954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:06.787 [2024-07-15 17:26:17.490967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.787 [2024-07-15 17:26:17.492349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.825 ms, result 0 00:20:48.273  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 103/1024 [MB] (25 MBps) Copying: 128/1024 [MB] (24 MBps) Copying: 154/1024 [MB] (25 MBps) Copying: 179/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (25 MBps) Copying: 229/1024 [MB] (24 MBps) Copying: 253/1024 [MB] (23 MBps) Copying: 277/1024 [MB] (23 MBps) Copying: 301/1024 [MB] (24 MBps) Copying: 325/1024 [MB] (24 MBps) Copying: 349/1024 [MB] (23 MBps) Copying: 374/1024 [MB] (25 MBps) Copying: 400/1024 [MB] (25 MBps) Copying: 425/1024 [MB] (25 MBps) Copying: 449/1024 [MB] (24 MBps) Copying: 474/1024 [MB] (24 MBps) Copying: 499/1024 [MB] (25 MBps) Copying: 524/1024 [MB] (25 MBps) Copying: 549/1024 [MB] (24 MBps) Copying: 571/1024 [MB] (22 MBps) Copying: 595/1024 [MB] (24 MBps) Copying: 621/1024 [MB] (25 MBps) Copying: 647/1024 [MB] (25 MBps) Copying: 673/1024 [MB] (25 MBps) Copying: 699/1024 [MB] (25 MBps) Copying: 724/1024 [MB] (25 MBps) Copying: 748/1024 [MB] (24 MBps) Copying: 772/1024 [MB] (24 MBps) Copying: 797/1024 [MB] (24 MBps) Copying: 819/1024 [MB] (21 MBps) Copying: 841/1024 [MB] (22 MBps) Copying: 865/1024 [MB] (23 MBps) Copying: 889/1024 [MB] (24 MBps) Copying: 914/1024 [MB] (25 MBps) Copying: 939/1024 [MB] (24 MBps) Copying: 963/1024 [MB] (24 MBps) Copying: 988/1024 [MB] (24 MBps) Copying: 1012/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 17:26:58.945858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.945965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.273 [2024-07-15 17:26:58.945991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:48.273 [2024-07-15 17:26:58.946005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.946055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.273 [2024-07-15 17:26:58.947355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.947397] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.273 [2024-07-15 17:26:58.947414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:20:48.273 [2024-07-15 17:26:58.947427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.949350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.949418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.273 [2024-07-15 17:26:58.949450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:20:48.273 [2024-07-15 17:26:58.949463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.966879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.967007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.273 [2024-07-15 17:26:58.967032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.380 ms 00:20:48.273 [2024-07-15 17:26:58.967046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.973650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.973756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.273 [2024-07-15 17:26:58.973813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.530 ms 00:20:48.273 [2024-07-15 17:26:58.973835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.976348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.976434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.273 [2024-07-15 17:26:58.976454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:20:48.273 [2024-07-15 17:26:58.976466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.980604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.980704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.273 [2024-07-15 17:26:58.980752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.086 ms 00:20:48.273 [2024-07-15 17:26:58.980765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.980941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.980968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.273 [2024-07-15 17:26:58.981000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:48.273 [2024-07-15 17:26:58.981013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.983157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.983204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:48.273 [2024-07-15 17:26:58.983221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:20:48.273 [2024-07-15 17:26:58.983234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.984795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:48.273 [2024-07-15 17:26:58.984834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:48.273 [2024-07-15 17:26:58.984850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:20:48.273 [2024-07-15 17:26:58.984863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.986128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.986169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.273 [2024-07-15 17:26:58.986185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:20:48.273 [2024-07-15 17:26:58.986197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.987453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.273 [2024-07-15 17:26:58.987495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.273 [2024-07-15 17:26:58.987511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:20:48.273 [2024-07-15 17:26:58.987546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.273 [2024-07-15 17:26:58.987587] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.273 [2024-07-15 17:26:58.987634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987848] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:48.273 [2024-07-15 17:26:58.987928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.987942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.987955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.987968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.987982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.987997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 
17:26:58.988191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:20:48.274 [2024-07-15 17:26:58.988558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.988998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.989012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.989025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.274 [2024-07-15 17:26:58.989051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.274 [2024-07-15 17:26:58.989065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20dc4d38-671b-47b6-9eb1-0791517c23db 00:20:48.274 [2024-07-15 17:26:58.989079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.274 [2024-07-15 17:26:58.989105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.274 [2024-07-15 17:26:58.989118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.274 [2024-07-15 17:26:58.989131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.274 [2024-07-15 17:26:58.989143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.274 [2024-07-15 17:26:58.989164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.274 [2024-07-15 17:26:58.989176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.274 [2024-07-15 17:26:58.989187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.274 [2024-07-15 17:26:58.989199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.274 [2024-07-15 17:26:58.989211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.274 [2024-07-15 17:26:58.989243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.274 [2024-07-15 17:26:58.989257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:20:48.274 [2024-07-15 17:26:58.989281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.274 [2024-07-15 17:26:58.992470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.274 [2024-07-15 17:26:58.992516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.274 [2024-07-15 17:26:58.992533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:20:48.274 [2024-07-15 17:26:58.992555] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:58.992768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.275 [2024-07-15 17:26:58.992793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.275 [2024-07-15 17:26:58.992808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:20:48.275 [2024-07-15 17:26:58.992820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.003261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.003622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.275 [2024-07-15 17:26:59.003749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.003916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.004067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.004124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.275 [2024-07-15 17:26:59.004283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.004469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.004640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.004704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.275 [2024-07-15 17:26:59.004873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.004924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.004999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.005146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.275 [2024-07-15 17:26:59.005199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.005258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.031867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.032255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.275 [2024-07-15 17:26:59.032397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.032537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.047259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.047650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.275 [2024-07-15 17:26:59.047832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.047988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.048141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.048194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.275 [2024-07-15 17:26:59.048290] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.048339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.048529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.048595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.275 [2024-07-15 17:26:59.048754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.048808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.048954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.049016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.275 [2024-07-15 17:26:59.049131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.049154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.049248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.049277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.275 [2024-07-15 17:26:59.049314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.049326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.049399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.049417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.275 [2024-07-15 17:26:59.049430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.049444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.049536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.275 [2024-07-15 17:26:59.049559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.275 [2024-07-15 17:26:59.049573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.275 [2024-07-15 17:26:59.049585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.275 [2024-07-15 17:26:59.049780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 103.877 ms, result 0 00:20:49.208 00:20:49.208 00:20:49.208 17:26:59 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:49.208 [2024-07-15 17:26:59.982154] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:20:49.208 [2024-07-15 17:26:59.982383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94009 ] 00:20:49.466 [2024-07-15 17:27:00.136347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:49.466 [2024-07-15 17:27:00.155923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.466 [2024-07-15 17:27:00.287862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.743 [2024-07-15 17:27:00.452667] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.743 [2024-07-15 17:27:00.452780] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.003 [2024-07-15 17:27:00.617991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.618084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.003 [2024-07-15 17:27:00.618107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:50.003 [2024-07-15 17:27:00.618120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.618228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.618250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.003 [2024-07-15 17:27:00.618269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:50.003 [2024-07-15 17:27:00.618292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.618325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.003 [2024-07-15 17:27:00.618747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.003 [2024-07-15 17:27:00.618782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.618795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.003 [2024-07-15 17:27:00.618813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:20:50.003 [2024-07-15 17:27:00.618826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.621416] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.003 [2024-07-15 17:27:00.624953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.625007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.003 [2024-07-15 17:27:00.625026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.538 ms 00:20:50.003 [2024-07-15 17:27:00.625038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.625148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.625181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.003 [2024-07-15 17:27:00.625196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:50.003 [2024-07-15 17:27:00.625236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.637411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.637508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.003 [2024-07-15 17:27:00.637529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.087 ms 00:20:50.003 [2024-07-15 17:27:00.637542] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.637716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.637742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.003 [2024-07-15 17:27:00.637763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:50.003 [2024-07-15 17:27:00.637776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.637927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.637951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.003 [2024-07-15 17:27:00.637966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:50.003 [2024-07-15 17:27:00.637977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.638020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.003 [2024-07-15 17:27:00.640810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.640847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.003 [2024-07-15 17:27:00.640863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:20:50.003 [2024-07-15 17:27:00.640875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.640938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.003 [2024-07-15 17:27:00.640956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.003 [2024-07-15 17:27:00.640970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:50.003 [2024-07-15 17:27:00.640981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.003 [2024-07-15 17:27:00.641011] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.003 [2024-07-15 17:27:00.641061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:50.003 [2024-07-15 17:27:00.641138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.003 [2024-07-15 17:27:00.641174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:50.003 [2024-07-15 17:27:00.641297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.003 [2024-07-15 17:27:00.641316] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.004 [2024-07-15 17:27:00.641332] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:50.004 [2024-07-15 17:27:00.641348] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.004 [2024-07-15 17:27:00.641378] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.004 [2024-07-15 17:27:00.641394] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.004 [2024-07-15 17:27:00.641406] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:50.004 [2024-07-15 17:27:00.641417] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.004 [2024-07-15 17:27:00.641429] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.004 [2024-07-15 17:27:00.641442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.641461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.004 [2024-07-15 17:27:00.641487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:20:50.004 [2024-07-15 17:27:00.641499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.641599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.641615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.004 [2024-07-15 17:27:00.641638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:50.004 [2024-07-15 17:27:00.641650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.641788] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.004 [2024-07-15 17:27:00.641808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.004 [2024-07-15 17:27:00.641830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.004 [2024-07-15 17:27:00.641842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.641855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.004 [2024-07-15 17:27:00.641866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.641877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.004 [2024-07-15 17:27:00.641890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.004 [2024-07-15 17:27:00.641902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.004 [2024-07-15 17:27:00.641912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.004 [2024-07-15 17:27:00.641924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.004 [2024-07-15 17:27:00.641934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.004 [2024-07-15 17:27:00.641949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.004 [2024-07-15 17:27:00.641961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.004 [2024-07-15 17:27:00.641972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:50.004 [2024-07-15 17:27:00.641998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.004 [2024-07-15 17:27:00.642025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.004 [2024-07-15 17:27:00.642059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642069] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.004 [2024-07-15 17:27:00.642093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.004 [2024-07-15 17:27:00.642126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.004 [2024-07-15 17:27:00.642170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.004 [2024-07-15 17:27:00.642202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.004 [2024-07-15 17:27:00.642224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.004 [2024-07-15 17:27:00.642235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:50.004 [2024-07-15 17:27:00.642246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.004 [2024-07-15 17:27:00.642257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.004 [2024-07-15 17:27:00.642268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:50.004 [2024-07-15 17:27:00.642278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.004 [2024-07-15 17:27:00.642300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:50.004 [2024-07-15 17:27:00.642312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642322] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.004 [2024-07-15 17:27:00.642338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.004 [2024-07-15 17:27:00.642350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.004 [2024-07-15 17:27:00.642376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.004 [2024-07-15 17:27:00.642390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.004 [2024-07-15 17:27:00.642403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.004 [2024-07-15 17:27:00.642415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.004 [2024-07-15 17:27:00.642426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.004 [2024-07-15 17:27:00.642437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.004 [2024-07-15 17:27:00.642449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:50.004 [2024-07-15 17:27:00.642462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.004 [2024-07-15 17:27:00.642477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.004 [2024-07-15 17:27:00.642502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:50.004 [2024-07-15 17:27:00.642514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:50.004 [2024-07-15 17:27:00.642526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:50.004 [2024-07-15 17:27:00.642538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:50.004 [2024-07-15 17:27:00.642554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:50.004 [2024-07-15 17:27:00.642567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:50.004 [2024-07-15 17:27:00.642580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:50.004 [2024-07-15 17:27:00.642592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:50.004 [2024-07-15 17:27:00.642604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:50.004 [2024-07-15 17:27:00.642662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.004 [2024-07-15 17:27:00.642675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.004 [2024-07-15 17:27:00.642701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.004 [2024-07-15 17:27:00.642712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.004 [2024-07-15 17:27:00.642725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.004 [2024-07-15 17:27:00.642738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.642759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.004 [2024-07-15 17:27:00.642771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:20:50.004 [2024-07-15 17:27:00.642783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.673911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.674010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.004 [2024-07-15 17:27:00.674042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.050 ms 00:20:50.004 [2024-07-15 17:27:00.674069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.674265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.674288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.004 [2024-07-15 17:27:00.674316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:50.004 [2024-07-15 17:27:00.674332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.692016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.004 [2024-07-15 17:27:00.692103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.004 [2024-07-15 17:27:00.692126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.515 ms 00:20:50.004 [2024-07-15 17:27:00.692153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.004 [2024-07-15 17:27:00.692263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.692289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.005 [2024-07-15 17:27:00.692304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.005 [2024-07-15 17:27:00.692317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.693194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.693249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.005 [2024-07-15 17:27:00.693267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:20:50.005 [2024-07-15 17:27:00.693278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.693535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.693561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.005 [2024-07-15 17:27:00.693580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:20:50.005 [2024-07-15 17:27:00.693593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.703792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 
17:27:00.703878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.005 [2024-07-15 17:27:00.703901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.156 ms 00:20:50.005 [2024-07-15 17:27:00.703913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.707984] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:50.005 [2024-07-15 17:27:00.708060] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.005 [2024-07-15 17:27:00.708102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.708117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.005 [2024-07-15 17:27:00.708133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:20:50.005 [2024-07-15 17:27:00.708146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.724827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.724944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.005 [2024-07-15 17:27:00.725007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.605 ms 00:20:50.005 [2024-07-15 17:27:00.725029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.728956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.729023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.005 [2024-07-15 17:27:00.729043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.800 ms 00:20:50.005 [2024-07-15 17:27:00.729055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.730867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.730917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.005 [2024-07-15 17:27:00.730935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:20:50.005 [2024-07-15 17:27:00.730946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.731483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.731534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.005 [2024-07-15 17:27:00.731555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:20:50.005 [2024-07-15 17:27:00.731566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.762505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.762614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.005 [2024-07-15 17:27:00.762639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.908 ms 00:20:50.005 [2024-07-15 17:27:00.762653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.774730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.005 [2024-07-15 17:27:00.780537] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.780621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.005 [2024-07-15 17:27:00.780661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:20:50.005 [2024-07-15 17:27:00.780674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.780858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.780880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.005 [2024-07-15 17:27:00.780900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:50.005 [2024-07-15 17:27:00.780912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.781034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.781054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.005 [2024-07-15 17:27:00.781068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:50.005 [2024-07-15 17:27:00.781080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.781130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.781145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.005 [2024-07-15 17:27:00.781159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.005 [2024-07-15 17:27:00.781183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.781247] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.005 [2024-07-15 17:27:00.781272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.781288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.005 [2024-07-15 17:27:00.781302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:50.005 [2024-07-15 17:27:00.781318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.786627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.786694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.005 [2024-07-15 17:27:00.786714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.265 ms 00:20:50.005 [2024-07-15 17:27:00.786726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.786821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.005 [2024-07-15 17:27:00.786850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.005 [2024-07-15 17:27:00.786864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:50.005 [2024-07-15 17:27:00.786876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.005 [2024-07-15 17:27:00.788495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 169.903 ms, result 0 00:21:31.751  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 101/1024 [MB] 
(25 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 151/1024 [MB] (25 MBps) Copying: 176/1024 [MB] (25 MBps) Copying: 200/1024 [MB] (24 MBps) Copying: 224/1024 [MB] (24 MBps) Copying: 249/1024 [MB] (24 MBps) Copying: 274/1024 [MB] (25 MBps) Copying: 299/1024 [MB] (24 MBps) Copying: 323/1024 [MB] (24 MBps) Copying: 347/1024 [MB] (23 MBps) Copying: 373/1024 [MB] (25 MBps) Copying: 399/1024 [MB] (26 MBps) Copying: 424/1024 [MB] (25 MBps) Copying: 449/1024 [MB] (24 MBps) Copying: 474/1024 [MB] (25 MBps) Copying: 499/1024 [MB] (25 MBps) Copying: 524/1024 [MB] (25 MBps) Copying: 549/1024 [MB] (24 MBps) Copying: 574/1024 [MB] (24 MBps) Copying: 598/1024 [MB] (24 MBps) Copying: 622/1024 [MB] (23 MBps) Copying: 645/1024 [MB] (23 MBps) Copying: 670/1024 [MB] (24 MBps) Copying: 694/1024 [MB] (23 MBps) Copying: 719/1024 [MB] (24 MBps) Copying: 742/1024 [MB] (23 MBps) Copying: 768/1024 [MB] (25 MBps) Copying: 791/1024 [MB] (23 MBps) Copying: 815/1024 [MB] (23 MBps) Copying: 840/1024 [MB] (25 MBps) Copying: 865/1024 [MB] (24 MBps) Copying: 891/1024 [MB] (26 MBps) Copying: 916/1024 [MB] (24 MBps) Copying: 942/1024 [MB] (25 MBps) Copying: 968/1024 [MB] (26 MBps) Copying: 996/1024 [MB] (28 MBps) Copying: 1021/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 17:27:42.442928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.443054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.751 [2024-07-15 17:27:42.443089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:31.751 [2024-07-15 17:27:42.443109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.443161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.751 [2024-07-15 17:27:42.444567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.444608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.751 [2024-07-15 17:27:42.444630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:21:31.751 [2024-07-15 17:27:42.444649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.445107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.445154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.751 [2024-07-15 17:27:42.445190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:21:31.751 [2024-07-15 17:27:42.445210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.451182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.451232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.751 [2024-07-15 17:27:42.451262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.938 ms 00:21:31.751 [2024-07-15 17:27:42.451286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.461023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.461071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.751 [2024-07-15 17:27:42.461101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.689 ms 
00:21:31.751 [2024-07-15 17:27:42.461116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.463381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.463421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.751 [2024-07-15 17:27:42.463439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:21:31.751 [2024-07-15 17:27:42.463461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.467415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.467458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.751 [2024-07-15 17:27:42.467478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:21:31.751 [2024-07-15 17:27:42.467492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.467672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.467701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.751 [2024-07-15 17:27:42.467718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:31.751 [2024-07-15 17:27:42.467735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.469681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.469719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:31.751 [2024-07-15 17:27:42.469736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 00:21:31.751 [2024-07-15 17:27:42.469750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.471587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.471661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:31.751 [2024-07-15 17:27:42.471689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:21:31.751 [2024-07-15 17:27:42.471710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.473463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.473518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.751 [2024-07-15 17:27:42.473544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:21:31.751 [2024-07-15 17:27:42.473564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.475247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.751 [2024-07-15 17:27:42.475300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.751 [2024-07-15 17:27:42.475352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:21:31.751 [2024-07-15 17:27:42.475420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.751 [2024-07-15 17:27:42.475489] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.751 [2024-07-15 17:27:42.475537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 
[2024-07-15 17:27:42.475566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.475988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.476010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.476033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.476060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.751 [2024-07-15 17:27:42.476082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.752 [2024-07-15 17:27:42.476151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.476992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.752 [2024-07-15 17:27:42.477968] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:21:31.752 [2024-07-15 17:27:42.477997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20dc4d38-671b-47b6-9eb1-0791517c23db 00:21:31.752 [2024-07-15 17:27:42.478020] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:31.752 [2024-07-15 17:27:42.478041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:31.752 [2024-07-15 17:27:42.478061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:31.752 [2024-07-15 17:27:42.478083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:31.752 [2024-07-15 17:27:42.478114] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.752 [2024-07-15 17:27:42.478144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.752 [2024-07-15 17:27:42.478166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.752 [2024-07-15 17:27:42.478185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.752 [2024-07-15 17:27:42.478205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.752 [2024-07-15 17:27:42.478228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.752 [2024-07-15 17:27:42.478268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.752 [2024-07-15 17:27:42.478292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.741 ms 00:21:31.752 [2024-07-15 17:27:42.478312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.752 [2024-07-15 17:27:42.481861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.752 [2024-07-15 17:27:42.481897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.752 [2024-07-15 17:27:42.481922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:21:31.752 [2024-07-15 17:27:42.481937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.752 [2024-07-15 17:27:42.482144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.752 [2024-07-15 17:27:42.482166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.752 [2024-07-15 17:27:42.482182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:31.752 [2024-07-15 17:27:42.482197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.752 [2024-07-15 17:27:42.492582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.752 [2024-07-15 17:27:42.492631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.752 [2024-07-15 17:27:42.492673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.492688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.492787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.492806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.753 [2024-07-15 17:27:42.492821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.492835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.492948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
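The statistics dump above reports WAF as "inf" alongside total writes: 960 and user writes: 0. As a reading aid only, here is a minimal sketch that recomputes that figure from the dumped counters, assuming the usual definition WAF = total writes / user writes; this assumption is consistent with the second dump later in this log (121280 / 120320 ≈ 1.0080), and the recompute_waf helper below is purely illustrative, not part of SPDK.

import math
import re

def recompute_waf(dump_text):
    # Illustrative helper (assumption, not SPDK code): pull the two counters out
    # of an ftl_dev_dump_stats dump and recompute WAF = total writes / user writes.
    total = int(re.search(r"total writes:\s*(\d+)", dump_text).group(1))
    user = int(re.search(r"user writes:\s*(\d+)", dump_text).group(1))
    return math.inf if user == 0 else total / user

print(recompute_waf("total writes: 960 user writes: 0"))          # inf, as in the dump above
print(recompute_waf("total writes: 121280 user writes: 120320"))  # ~1.0080, as in the later dump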
00:21:31.753 [2024-07-15 17:27:42.492971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.753 [2024-07-15 17:27:42.493000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.493021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.493049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.493066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.753 [2024-07-15 17:27:42.493092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.493107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.513836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.513929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.753 [2024-07-15 17:27:42.513965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.514005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.529794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.529874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.753 [2024-07-15 17:27:42.529897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.529912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.530048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.753 [2024-07-15 17:27:42.530086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.530172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.753 [2024-07-15 17:27:42.530207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.530341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.753 [2024-07-15 17:27:42.530419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.530501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.753 [2024-07-15 17:27:42.530564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 
17:27:42.530643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.753 [2024-07-15 17:27:42.530676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.530782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.753 [2024-07-15 17:27:42.530804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.753 [2024-07-15 17:27:42.530854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.753 [2024-07-15 17:27:42.530869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.753 [2024-07-15 17:27:42.531111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.108 ms, result 0 00:21:32.316 00:21:32.316 00:21:32.316 17:27:43 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:34.844 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:34.844 17:27:45 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:34.844 [2024-07-15 17:27:45.310140] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:21:34.844 [2024-07-15 17:27:45.310346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94463 ] 00:21:34.844 [2024-07-15 17:27:45.463601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
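The restore test step above first verifies the data read back from ftl0 against the stored md5, then writes the test file back onto the bdev with spdk_dd. A minimal sketch of that two-command sequence, with the commands, flags, and paths copied verbatim from the log lines above; this is only an illustration of the calls being made, not the actual test/ftl/restore.sh script.

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"  # path as it appears in this log

# 1. Check the file read back from ftl0 against the recorded checksum
#    (restore.sh@76 above: md5sum -c testfile.md5).
subprocess.run(["md5sum", "-c", f"{SPDK}/test/ftl/testfile.md5"], check=True)

# 2. Write the test file back onto the ftl0 bdev at the --seek=131072 offset,
#    using the bdev configuration captured in ftl.json (restore.sh@79 above).
subprocess.run([
    f"{SPDK}/build/bin/spdk_dd",
    f"--if={SPDK}/test/ftl/testfile",
    "--ob=ftl0",
    f"--json={SPDK}/test/ftl/config/ftl.json",
    "--seek=131072",
], check=True)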
00:21:34.844 [2024-07-15 17:27:45.488707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.844 [2024-07-15 17:27:45.630445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.103 [2024-07-15 17:27:45.793960] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.103 [2024-07-15 17:27:45.794070] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.103 [2024-07-15 17:27:45.958311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.103 [2024-07-15 17:27:45.958438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.103 [2024-07-15 17:27:45.958478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:35.103 [2024-07-15 17:27:45.958490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.103 [2024-07-15 17:27:45.958591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.103 [2024-07-15 17:27:45.958613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.103 [2024-07-15 17:27:45.958632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:35.103 [2024-07-15 17:27:45.958645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.103 [2024-07-15 17:27:45.958677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.103 [2024-07-15 17:27:45.959055] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.103 [2024-07-15 17:27:45.959090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.103 [2024-07-15 17:27:45.959104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.103 [2024-07-15 17:27:45.959130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:35.103 [2024-07-15 17:27:45.959144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.962047] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.363 [2024-07-15 17:27:45.966072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.966123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.363 [2024-07-15 17:27:45.966169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:21:35.363 [2024-07-15 17:27:45.966182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.966294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.966315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.363 [2024-07-15 17:27:45.966343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:35.363 [2024-07-15 17:27:45.966359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.979355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.979435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.363 [2024-07-15 17:27:45.979468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.898 ms 00:21:35.363 [2024-07-15 17:27:45.979480] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.979646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.979667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.363 [2024-07-15 17:27:45.979686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:35.363 [2024-07-15 17:27:45.979699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.979797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.979820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.363 [2024-07-15 17:27:45.979833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:35.363 [2024-07-15 17:27:45.979845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.979884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.363 [2024-07-15 17:27:45.983034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.983237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.363 [2024-07-15 17:27:45.983377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:21:35.363 [2024-07-15 17:27:45.983433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.983652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.363 [2024-07-15 17:27:45.983712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.363 [2024-07-15 17:27:45.983757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:35.363 [2024-07-15 17:27:45.983795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.363 [2024-07-15 17:27:45.983872] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.364 [2024-07-15 17:27:45.984041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.364 [2024-07-15 17:27:45.984170] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.364 [2024-07-15 17:27:45.984311] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:35.364 [2024-07-15 17:27:45.984504] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.364 [2024-07-15 17:27:45.984612] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.364 [2024-07-15 17:27:45.984647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:35.364 [2024-07-15 17:27:45.984664] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.364 [2024-07-15 17:27:45.984679] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.364 [2024-07-15 17:27:45.984692] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.364 [2024-07-15 17:27:45.984714] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:21:35.364 [2024-07-15 17:27:45.984726] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.364 [2024-07-15 17:27:45.984738] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.364 [2024-07-15 17:27:45.984751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.364 [2024-07-15 17:27:45.984768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.364 [2024-07-15 17:27:45.984781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:21:35.364 [2024-07-15 17:27:45.984792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.364 [2024-07-15 17:27:45.984897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.364 [2024-07-15 17:27:45.984923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.364 [2024-07-15 17:27:45.984936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:35.364 [2024-07-15 17:27:45.984947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.364 [2024-07-15 17:27:45.985062] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.364 [2024-07-15 17:27:45.985079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.364 [2024-07-15 17:27:45.985099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.364 [2024-07-15 17:27:45.985147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.364 [2024-07-15 17:27:45.985179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.364 [2024-07-15 17:27:45.985200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.364 [2024-07-15 17:27:45.985211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.364 [2024-07-15 17:27:45.985226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.364 [2024-07-15 17:27:45.985238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.364 [2024-07-15 17:27:45.985249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.364 [2024-07-15 17:27:45.985300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.364 [2024-07-15 17:27:45.985322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.364 [2024-07-15 17:27:45.985355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985391] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.364 [2024-07-15 17:27:45.985414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.364 [2024-07-15 17:27:45.985450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.364 [2024-07-15 17:27:45.985501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.364 [2024-07-15 17:27:45.985535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.364 [2024-07-15 17:27:45.985557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.364 [2024-07-15 17:27:45.985568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.364 [2024-07-15 17:27:45.985579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.364 [2024-07-15 17:27:45.985590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.364 [2024-07-15 17:27:45.985601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.364 [2024-07-15 17:27:45.985612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.364 [2024-07-15 17:27:45.985648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.364 [2024-07-15 17:27:45.985659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985685] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.364 [2024-07-15 17:27:45.985701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.364 [2024-07-15 17:27:45.985714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.364 [2024-07-15 17:27:45.985726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.364 [2024-07-15 17:27:45.985738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.364 [2024-07-15 17:27:45.985750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.364 [2024-07-15 17:27:45.985761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.364 [2024-07-15 17:27:45.985772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.364 [2024-07-15 17:27:45.985783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.364 [2024-07-15 17:27:45.985794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:21:35.364 [2024-07-15 17:27:45.985807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.364 [2024-07-15 17:27:45.985821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.985835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.364 [2024-07-15 17:27:45.985849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.364 [2024-07-15 17:27:45.985861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.364 [2024-07-15 17:27:45.985875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.364 [2024-07-15 17:27:45.985888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.364 [2024-07-15 17:27:45.985903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.364 [2024-07-15 17:27:45.985917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.364 [2024-07-15 17:27:45.985929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.364 [2024-07-15 17:27:45.985941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.364 [2024-07-15 17:27:45.985953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.985965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.985977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.985990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.986002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.364 [2024-07-15 17:27:45.986014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.364 [2024-07-15 17:27:45.986041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.986054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.364 [2024-07-15 17:27:45.986066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.364 [2024-07-15 17:27:45.986078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.364 [2024-07-15 17:27:45.986105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.364 [2024-07-15 17:27:45.986117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.364 [2024-07-15 17:27:45.986137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.364 [2024-07-15 17:27:45.986149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:21:35.364 [2024-07-15 17:27:45.986171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.364 [2024-07-15 17:27:46.017216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.364 [2024-07-15 17:27:46.017555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.364 [2024-07-15 17:27:46.017685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.970 ms 00:21:35.364 [2024-07-15 17:27:46.017750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.364 [2024-07-15 17:27:46.018068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.364 [2024-07-15 17:27:46.018144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.365 [2024-07-15 17:27:46.018313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:35.365 [2024-07-15 17:27:46.018399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.035989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.036255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.365 [2024-07-15 17:27:46.036391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.332 ms 00:21:35.365 [2024-07-15 17:27:46.036449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.036557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.036637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.365 [2024-07-15 17:27:46.036686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.365 [2024-07-15 17:27:46.036725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.037637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.037825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.365 [2024-07-15 17:27:46.037946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:21:35.365 [2024-07-15 17:27:46.037998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.038229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.038299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.365 [2024-07-15 17:27:46.038488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:21:35.365 [2024-07-15 17:27:46.038545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.048958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 
17:27:46.049135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.365 [2024-07-15 17:27:46.049254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.260 ms 00:21:35.365 [2024-07-15 17:27:46.049323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.053343] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:35.365 [2024-07-15 17:27:46.053545] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.365 [2024-07-15 17:27:46.053693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.053741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.365 [2024-07-15 17:27:46.053780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:21:35.365 [2024-07-15 17:27:46.053887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.070448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.070626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.365 [2024-07-15 17:27:46.070763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.455 ms 00:21:35.365 [2024-07-15 17:27:46.070823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.072858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.073011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:35.365 [2024-07-15 17:27:46.073152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:21:35.365 [2024-07-15 17:27:46.073205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.075064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.075218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.365 [2024-07-15 17:27:46.075339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:21:35.365 [2024-07-15 17:27:46.075421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.075951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.076095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.365 [2024-07-15 17:27:46.076206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:21:35.365 [2024-07-15 17:27:46.076256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.106324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.106725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.365 [2024-07-15 17:27:46.106762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.006 ms 00:21:35.365 [2024-07-15 17:27:46.106777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.115495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.365 [2024-07-15 17:27:46.120223] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.120264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.365 [2024-07-15 17:27:46.120301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.282 ms 00:21:35.365 [2024-07-15 17:27:46.120313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.120477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.120500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.365 [2024-07-15 17:27:46.120521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:35.365 [2024-07-15 17:27:46.120533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.120676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.120696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.365 [2024-07-15 17:27:46.120710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:35.365 [2024-07-15 17:27:46.120722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.120759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.120774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.365 [2024-07-15 17:27:46.120787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:35.365 [2024-07-15 17:27:46.120798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.120852] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.365 [2024-07-15 17:27:46.120876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.120892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.365 [2024-07-15 17:27:46.120904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:35.365 [2024-07-15 17:27:46.120927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.126122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.126166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.365 [2024-07-15 17:27:46.126201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.167 ms 00:21:35.365 [2024-07-15 17:27:46.126213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.126312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.365 [2024-07-15 17:27:46.126341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.365 [2024-07-15 17:27:46.126354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:35.365 [2024-07-15 17:27:46.126408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.365 [2024-07-15 17:27:46.128053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 169.166 ms, result 0 00:22:18.022  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 
MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (23 MBps) Copying: 165/1024 [MB] (23 MBps) Copying: 190/1024 [MB] (24 MBps) Copying: 216/1024 [MB] (25 MBps) Copying: 241/1024 [MB] (25 MBps) Copying: 267/1024 [MB] (25 MBps) Copying: 293/1024 [MB] (25 MBps) Copying: 318/1024 [MB] (25 MBps) Copying: 344/1024 [MB] (26 MBps) Copying: 370/1024 [MB] (25 MBps) Copying: 396/1024 [MB] (25 MBps) Copying: 420/1024 [MB] (24 MBps) Copying: 444/1024 [MB] (23 MBps) Copying: 468/1024 [MB] (23 MBps) Copying: 493/1024 [MB] (24 MBps) Copying: 517/1024 [MB] (23 MBps) Copying: 541/1024 [MB] (24 MBps) Copying: 564/1024 [MB] (23 MBps) Copying: 587/1024 [MB] (22 MBps) Copying: 608/1024 [MB] (21 MBps) Copying: 631/1024 [MB] (22 MBps) Copying: 652/1024 [MB] (21 MBps) Copying: 676/1024 [MB] (24 MBps) Copying: 702/1024 [MB] (25 MBps) Copying: 728/1024 [MB] (26 MBps) Copying: 752/1024 [MB] (24 MBps) Copying: 777/1024 [MB] (25 MBps) Copying: 802/1024 [MB] (24 MBps) Copying: 827/1024 [MB] (25 MBps) Copying: 854/1024 [MB] (26 MBps) Copying: 881/1024 [MB] (27 MBps) Copying: 907/1024 [MB] (26 MBps) Copying: 933/1024 [MB] (26 MBps) Copying: 959/1024 [MB] (25 MBps) Copying: 985/1024 [MB] (25 MBps) Copying: 1012/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 17:28:28.689106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.689195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:18.022 [2024-07-15 17:28:28.689220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:18.022 [2024-07-15 17:28:28.689233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.692723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:18.022 [2024-07-15 17:28:28.697111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.697155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:18.022 [2024-07-15 17:28:28.697190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.305 ms 00:22:18.022 [2024-07-15 17:28:28.697202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.708818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.708868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:18.022 [2024-07-15 17:28:28.708888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.536 ms 00:22:18.022 [2024-07-15 17:28:28.708916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.729368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.729470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:18.022 [2024-07-15 17:28:28.729497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.424 ms 00:22:18.022 [2024-07-15 17:28:28.729510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.736039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.736087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:18.022 [2024-07-15 17:28:28.736119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 6.485 ms 00:22:18.022 [2024-07-15 17:28:28.736131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.738380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.738530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:18.022 [2024-07-15 17:28:28.738549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:22:18.022 [2024-07-15 17:28:28.738561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.742071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.742124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:18.022 [2024-07-15 17:28:28.742140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:22:18.022 [2024-07-15 17:28:28.742152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.843557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.843708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:18.022 [2024-07-15 17:28:28.843732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.355 ms 00:22:18.022 [2024-07-15 17:28:28.843745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.846374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.846430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:18.022 [2024-07-15 17:28:28.846463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:22:18.022 [2024-07-15 17:28:28.846474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.847922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.847958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:18.022 [2024-07-15 17:28:28.847973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:22:18.022 [2024-07-15 17:28:28.847985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.849210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.849250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:18.022 [2024-07-15 17:28:28.849265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:22:18.022 [2024-07-15 17:28:28.849275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.850392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.022 [2024-07-15 17:28:28.850428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:18.022 [2024-07-15 17:28:28.850459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:22:18.022 [2024-07-15 17:28:28.850470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.022 [2024-07-15 17:28:28.850508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:18.022 [2024-07-15 17:28:28.850532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120320 / 261120 
wr_cnt: 1 state: open 00:22:18.022 [2024-07-15 17:28:28.850553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:18.022 [2024-07-15 17:28:28.850944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.850959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.850972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.850985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.850998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851176] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851520] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 17:28:28.851827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:18.023 [2024-07-15 
17:28:28.851850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:18.023 [2024-07-15 17:28:28.851871] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20dc4d38-671b-47b6-9eb1-0791517c23db 00:22:18.023 [2024-07-15 17:28:28.851895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120320 00:22:18.023 [2024-07-15 17:28:28.851908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121280 00:22:18.023 [2024-07-15 17:28:28.851928] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120320 00:22:18.023 [2024-07-15 17:28:28.851941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:22:18.023 [2024-07-15 17:28:28.851953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:18.023 [2024-07-15 17:28:28.851965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:18.023 [2024-07-15 17:28:28.851977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:18.023 [2024-07-15 17:28:28.851988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:18.023 [2024-07-15 17:28:28.851998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:18.023 [2024-07-15 17:28:28.852010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.023 [2024-07-15 17:28:28.852022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:18.023 [2024-07-15 17:28:28.852034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:22:18.023 [2024-07-15 17:28:28.852046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 17:28:28.855121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.023 [2024-07-15 17:28:28.855262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:18.023 [2024-07-15 17:28:28.855395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:22:18.023 [2024-07-15 17:28:28.855512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 17:28:28.855746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.023 [2024-07-15 17:28:28.855798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:18.023 [2024-07-15 17:28:28.855894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:22:18.023 [2024-07-15 17:28:28.855941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 17:28:28.865400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.023 [2024-07-15 17:28:28.865623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.023 [2024-07-15 17:28:28.865775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.023 [2024-07-15 17:28:28.865826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 17:28:28.866007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.023 [2024-07-15 17:28:28.866158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.023 [2024-07-15 17:28:28.866280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.023 [2024-07-15 17:28:28.866344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 
17:28:28.866579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.023 [2024-07-15 17:28:28.866711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.023 [2024-07-15 17:28:28.866764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.023 [2024-07-15 17:28:28.866887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.023 [2024-07-15 17:28:28.866962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.023 [2024-07-15 17:28:28.867051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.023 [2024-07-15 17:28:28.867153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.023 [2024-07-15 17:28:28.867211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.295 [2024-07-15 17:28:28.888709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.295 [2024-07-15 17:28:28.889026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.295 [2024-07-15 17:28:28.889145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.295 [2024-07-15 17:28:28.889264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.295 [2024-07-15 17:28:28.903784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.295 [2024-07-15 17:28:28.904068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.295 [2024-07-15 17:28:28.904196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.295 [2024-07-15 17:28:28.904246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.295 [2024-07-15 17:28:28.904396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.295 [2024-07-15 17:28:28.904454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.295 [2024-07-15 17:28:28.904552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.295 [2024-07-15 17:28:28.904601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.295 [2024-07-15 17:28:28.904686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.295 [2024-07-15 17:28:28.904792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.295 [2024-07-15 17:28:28.904863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.295 [2024-07-15 17:28:28.904900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.295 [2024-07-15 17:28:28.905050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.295 [2024-07-15 17:28:28.905126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.295 [2024-07-15 17:28:28.905172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.296 [2024-07-15 17:28:28.905208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.296 [2024-07-15 17:28:28.905287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.296 [2024-07-15 17:28:28.905353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:18.296 [2024-07-15 17:28:28.905392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.296 [2024-07-15 17:28:28.905404] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.296 [2024-07-15 17:28:28.905471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.296 [2024-07-15 17:28:28.905487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.296 [2024-07-15 17:28:28.905499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.296 [2024-07-15 17:28:28.905510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.296 [2024-07-15 17:28:28.905576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.296 [2024-07-15 17:28:28.905594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.296 [2024-07-15 17:28:28.905619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.296 [2024-07-15 17:28:28.905631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.296 [2024-07-15 17:28:28.905809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.754 ms, result 0 00:22:19.224 00:22:19.224 00:22:19.224 17:28:29 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:19.224 [2024-07-15 17:28:29.887206] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:19.224 [2024-07-15 17:28:29.887428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94912 ] 00:22:19.224 [2024-07-15 17:28:30.048397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
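The spdk_dd invocation above reads the previously written region back from the ftl0 bdev into a test file. As a rough cross-check (a minimal sketch, assuming the FTL bdev exposes 4 KiB blocks, which this log does not state explicitly), the --skip/--count arguments and the WAF printed in the preceding shutdown dump work out as follows:

    # Sketch: sanity-check figures taken from the FTL restore log above.
    # Assumption: the FTL bdev uses 4 KiB blocks (not stated in this log).
    FTL_BLOCK_SIZE = 4096  # bytes, assumed

    # spdk_dd --ib=ftl0 --skip=131072 --count=262144
    skip_blocks, count_blocks = 131072, 262144
    offset_mib = skip_blocks * FTL_BLOCK_SIZE / 1024**2   # 512 MiB into the device
    length_mib = count_blocks * FTL_BLOCK_SIZE / 1024**2  # 1024 MiB, matching the "Copying: .../1024 [MB]" progress below

    # Write-amplification factor from the preceding ftl_dev_dump_stats block:
    #   total writes: 121280, user writes: 120320  ->  reported WAF: 1.0080
    waf = 121280 / 120320
    print(f"offset={offset_mib:.0f} MiB, length={length_mib:.0f} MiB, WAF={waf:.4f}")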
00:22:19.224 [2024-07-15 17:28:30.072682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.480 [2024-07-15 17:28:30.193852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.739 [2024-07-15 17:28:30.355652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.739 [2024-07-15 17:28:30.355749] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.739 [2024-07-15 17:28:30.520369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.520477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:19.739 [2024-07-15 17:28:30.520515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:19.739 [2024-07-15 17:28:30.520528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.520620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.520642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.739 [2024-07-15 17:28:30.520661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:19.739 [2024-07-15 17:28:30.520683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.520725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:19.739 [2024-07-15 17:28:30.521057] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:19.739 [2024-07-15 17:28:30.521082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.521095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.739 [2024-07-15 17:28:30.521108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:22:19.739 [2024-07-15 17:28:30.521124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.523879] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:19.739 [2024-07-15 17:28:30.527728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.527778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:19.739 [2024-07-15 17:28:30.527797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.851 ms 00:22:19.739 [2024-07-15 17:28:30.527822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.527916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.527944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:19.739 [2024-07-15 17:28:30.527958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:19.739 [2024-07-15 17:28:30.527975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.540439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.540535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.739 [2024-07-15 17:28:30.540556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:22:19.739 [2024-07-15 17:28:30.540568] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.540687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.540711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.739 [2024-07-15 17:28:30.540731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:19.739 [2024-07-15 17:28:30.540743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.540871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.540902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:19.739 [2024-07-15 17:28:30.540917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:19.739 [2024-07-15 17:28:30.540930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.540971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:19.739 [2024-07-15 17:28:30.543868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.543903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.739 [2024-07-15 17:28:30.543949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:22:19.739 [2024-07-15 17:28:30.543961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.544017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.544034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:19.739 [2024-07-15 17:28:30.544047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:19.739 [2024-07-15 17:28:30.544059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.544087] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:19.739 [2024-07-15 17:28:30.544119] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:19.739 [2024-07-15 17:28:30.544171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:19.739 [2024-07-15 17:28:30.544197] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:19.739 [2024-07-15 17:28:30.544300] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:19.739 [2024-07-15 17:28:30.544315] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:19.739 [2024-07-15 17:28:30.544342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:19.739 [2024-07-15 17:28:30.544357] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:19.739 [2024-07-15 17:28:30.544371] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:19.739 [2024-07-15 17:28:30.544384] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:19.739 [2024-07-15 17:28:30.544411] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:22:19.739 [2024-07-15 17:28:30.544438] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:19.739 [2024-07-15 17:28:30.544450] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:19.739 [2024-07-15 17:28:30.544462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.544481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:19.739 [2024-07-15 17:28:30.544494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:22:19.739 [2024-07-15 17:28:30.544506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.544593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.739 [2024-07-15 17:28:30.544622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:19.739 [2024-07-15 17:28:30.544636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:19.739 [2024-07-15 17:28:30.544647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.739 [2024-07-15 17:28:30.544782] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:19.739 [2024-07-15 17:28:30.544815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:19.740 [2024-07-15 17:28:30.544835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.740 [2024-07-15 17:28:30.544847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.544859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:19.740 [2024-07-15 17:28:30.544870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.544882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:19.740 [2024-07-15 17:28:30.544894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:19.740 [2024-07-15 17:28:30.544904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:19.740 [2024-07-15 17:28:30.544915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.740 [2024-07-15 17:28:30.544926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:19.740 [2024-07-15 17:28:30.544936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:19.740 [2024-07-15 17:28:30.544947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:19.740 [2024-07-15 17:28:30.544962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:19.740 [2024-07-15 17:28:30.544974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:19.740 [2024-07-15 17:28:30.545000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:19.740 [2024-07-15 17:28:30.545024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:19.740 [2024-07-15 17:28:30.545058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545069] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:19.740 [2024-07-15 17:28:30.545091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:19.740 [2024-07-15 17:28:30.545124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:19.740 [2024-07-15 17:28:30.545161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:19.740 [2024-07-15 17:28:30.545195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.740 [2024-07-15 17:28:30.545217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:19.740 [2024-07-15 17:28:30.545228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:19.740 [2024-07-15 17:28:30.545239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:19.740 [2024-07-15 17:28:30.545250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:19.740 [2024-07-15 17:28:30.545261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:19.740 [2024-07-15 17:28:30.545272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:19.740 [2024-07-15 17:28:30.545321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:19.740 [2024-07-15 17:28:30.545334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545346] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:19.740 [2024-07-15 17:28:30.545359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:19.740 [2024-07-15 17:28:30.545400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:19.740 [2024-07-15 17:28:30.545415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:19.740 [2024-07-15 17:28:30.545428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:19.740 [2024-07-15 17:28:30.545440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:19.740 [2024-07-15 17:28:30.545455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:19.740 [2024-07-15 17:28:30.545467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:19.740 [2024-07-15 17:28:30.545478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:19.740 [2024-07-15 17:28:30.545490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:22:19.740 [2024-07-15 17:28:30.545504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:19.740 [2024-07-15 17:28:30.545519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:19.740 [2024-07-15 17:28:30.545557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:19.740 [2024-07-15 17:28:30.545570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:19.740 [2024-07-15 17:28:30.545583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:19.740 [2024-07-15 17:28:30.545596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:19.740 [2024-07-15 17:28:30.545608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:19.740 [2024-07-15 17:28:30.545625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:19.740 [2024-07-15 17:28:30.545654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:19.740 [2024-07-15 17:28:30.545666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:19.740 [2024-07-15 17:28:30.545679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:19.740 [2024-07-15 17:28:30.545740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:19.740 [2024-07-15 17:28:30.545769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:19.740 [2024-07-15 17:28:30.545795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:19.740 [2024-07-15 17:28:30.545807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:19.740 [2024-07-15 17:28:30.545819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:19.740 [2024-07-15 17:28:30.545832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.740 [2024-07-15 17:28:30.545848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:19.740 [2024-07-15 17:28:30.545864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:22:19.740 [2024-07-15 17:28:30.545876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.740 [2024-07-15 17:28:30.577142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.740 [2024-07-15 17:28:30.577233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.740 [2024-07-15 17:28:30.577275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.190 ms 00:22:19.740 [2024-07-15 17:28:30.577303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.740 [2024-07-15 17:28:30.577527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.740 [2024-07-15 17:28:30.577553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:19.740 [2024-07-15 17:28:30.577578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:19.740 [2024-07-15 17:28:30.577593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.594533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.594594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.998 [2024-07-15 17:28:30.594627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.789 ms 00:22:19.998 [2024-07-15 17:28:30.594640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.594709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.594732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.998 [2024-07-15 17:28:30.594746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:19.998 [2024-07-15 17:28:30.594758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.595650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.595697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.998 [2024-07-15 17:28:30.595715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:22:19.998 [2024-07-15 17:28:30.595727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.595969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.595988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.998 [2024-07-15 17:28:30.596005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:22:19.998 [2024-07-15 17:28:30.596017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.606254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 
17:28:30.606304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.998 [2024-07-15 17:28:30.606337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.209 ms 00:22:19.998 [2024-07-15 17:28:30.606349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.610184] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:19.998 [2024-07-15 17:28:30.610227] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:19.998 [2024-07-15 17:28:30.610262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.610280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:19.998 [2024-07-15 17:28:30.610293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:22:19.998 [2024-07-15 17:28:30.610304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.626058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.626104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:19.998 [2024-07-15 17:28:30.626136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.709 ms 00:22:19.998 [2024-07-15 17:28:30.626175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.628266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.628304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:19.998 [2024-07-15 17:28:30.628336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:22:19.998 [2024-07-15 17:28:30.628348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.630069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.630107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:19.998 [2024-07-15 17:28:30.630138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:22:19.998 [2024-07-15 17:28:30.630149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.998 [2024-07-15 17:28:30.630603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.998 [2024-07-15 17:28:30.630632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:19.999 [2024-07-15 17:28:30.630651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:22:19.999 [2024-07-15 17:28:30.630663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.660366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.660501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:19.999 [2024-07-15 17:28:30.660541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.659 ms 00:22:19.999 [2024-07-15 17:28:30.660555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.668465] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:19.999 [2024-07-15 17:28:30.671527] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.671562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:19.999 [2024-07-15 17:28:30.671595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.888 ms 00:22:19.999 [2024-07-15 17:28:30.671607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.671714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.671734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:19.999 [2024-07-15 17:28:30.671754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:19.999 [2024-07-15 17:28:30.671766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.674334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.674428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:19.999 [2024-07-15 17:28:30.674446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:22:19.999 [2024-07-15 17:28:30.674458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.674499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.674515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:19.999 [2024-07-15 17:28:30.674528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:19.999 [2024-07-15 17:28:30.674540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.674602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:19.999 [2024-07-15 17:28:30.674624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.674636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:19.999 [2024-07-15 17:28:30.674649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:19.999 [2024-07-15 17:28:30.674661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.679439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.679478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:19.999 [2024-07-15 17:28:30.679494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:22:19.999 [2024-07-15 17:28:30.679506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.679591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.999 [2024-07-15 17:28:30.679627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:19.999 [2024-07-15 17:28:30.679649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:19.999 [2024-07-15 17:28:30.679662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.999 [2024-07-15 17:28:30.686766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 164.661 ms, result 0 00:23:02.469  Copying: 23/1024 [MB] (23 MBps) Copying: 49/1024 [MB] (25 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 100/1024 [MB] 
(25 MBps) Copying: 126/1024 [MB] (26 MBps) Copying: 151/1024 [MB] (25 MBps) Copying: 176/1024 [MB] (24 MBps) Copying: 200/1024 [MB] (24 MBps) Copying: 225/1024 [MB] (24 MBps) Copying: 248/1024 [MB] (23 MBps) Copying: 273/1024 [MB] (24 MBps) Copying: 296/1024 [MB] (23 MBps) Copying: 321/1024 [MB] (24 MBps) Copying: 346/1024 [MB] (24 MBps) Copying: 370/1024 [MB] (24 MBps) Copying: 395/1024 [MB] (24 MBps) Copying: 419/1024 [MB] (24 MBps) Copying: 443/1024 [MB] (24 MBps) Copying: 467/1024 [MB] (23 MBps) Copying: 492/1024 [MB] (25 MBps) Copying: 516/1024 [MB] (24 MBps) Copying: 540/1024 [MB] (24 MBps) Copying: 565/1024 [MB] (24 MBps) Copying: 588/1024 [MB] (22 MBps) Copying: 611/1024 [MB] (23 MBps) Copying: 634/1024 [MB] (23 MBps) Copying: 657/1024 [MB] (23 MBps) Copying: 680/1024 [MB] (22 MBps) Copying: 703/1024 [MB] (22 MBps) Copying: 727/1024 [MB] (24 MBps) Copying: 752/1024 [MB] (24 MBps) Copying: 776/1024 [MB] (24 MBps) Copying: 799/1024 [MB] (22 MBps) Copying: 821/1024 [MB] (22 MBps) Copying: 845/1024 [MB] (23 MBps) Copying: 870/1024 [MB] (24 MBps) Copying: 893/1024 [MB] (23 MBps) Copying: 917/1024 [MB] (23 MBps) Copying: 941/1024 [MB] (24 MBps) Copying: 966/1024 [MB] (24 MBps) Copying: 992/1024 [MB] (25 MBps) Copying: 1017/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 17:29:13.220929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.469 [2024-07-15 17:29:13.221391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.469 [2024-07-15 17:29:13.221583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:02.469 [2024-07-15 17:29:13.221735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.469 [2024-07-15 17:29:13.221832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.469 [2024-07-15 17:29:13.223276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.469 [2024-07-15 17:29:13.223497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.469 [2024-07-15 17:29:13.223655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:23:02.469 [2024-07-15 17:29:13.223784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.469 [2024-07-15 17:29:13.224158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.470 [2024-07-15 17:29:13.224310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.470 [2024-07-15 17:29:13.224490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:23:02.470 [2024-07-15 17:29:13.224636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.470 [2024-07-15 17:29:13.231589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.470 [2024-07-15 17:29:13.231818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.470 [2024-07-15 17:29:13.232013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:23:02.470 [2024-07-15 17:29:13.232076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.470 [2024-07-15 17:29:13.241258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.470 [2024-07-15 17:29:13.241504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.470 [2024-07-15 17:29:13.241655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 9.052 ms 00:23:02.470 [2024-07-15 17:29:13.241771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.470 [2024-07-15 17:29:13.244049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.470 [2024-07-15 17:29:13.244100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:02.470 [2024-07-15 17:29:13.244121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:23:02.470 [2024-07-15 17:29:13.244136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.470 [2024-07-15 17:29:13.248197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.470 [2024-07-15 17:29:13.248272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.470 [2024-07-15 17:29:13.248294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms 00:23:02.470 [2024-07-15 17:29:13.248325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.369535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.727 [2024-07-15 17:29:13.369670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.727 [2024-07-15 17:29:13.369695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.122 ms 00:23:02.727 [2024-07-15 17:29:13.369710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.372652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.727 [2024-07-15 17:29:13.372694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:02.727 [2024-07-15 17:29:13.372711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:23:02.727 [2024-07-15 17:29:13.372723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.374395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.727 [2024-07-15 17:29:13.374433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:02.727 [2024-07-15 17:29:13.374449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:23:02.727 [2024-07-15 17:29:13.374461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.375665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.727 [2024-07-15 17:29:13.375716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.727 [2024-07-15 17:29:13.375733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:23:02.727 [2024-07-15 17:29:13.375745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.376840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.727 [2024-07-15 17:29:13.376879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.727 [2024-07-15 17:29:13.376894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:23:02.727 [2024-07-15 17:29:13.376924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.727 [2024-07-15 17:29:13.376964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.727 [2024-07-15 17:29:13.376989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 
wr_cnt: 1 state: open 00:23:02.727 [2024-07-15 17:29:13.377006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377736] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.377986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378172] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 17:29:13.378610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.728 [2024-07-15 
17:29:13.378634] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.728 [2024-07-15 17:29:13.378656] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20dc4d38-671b-47b6-9eb1-0791517c23db 00:23:02.728 [2024-07-15 17:29:13.378670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:23:02.728 [2024-07-15 17:29:13.378697] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14272 00:23:02.728 [2024-07-15 17:29:13.378719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 13312 00:23:02.728 [2024-07-15 17:29:13.378732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0721 00:23:02.728 [2024-07-15 17:29:13.378744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.728 [2024-07-15 17:29:13.378757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.728 [2024-07-15 17:29:13.378770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.728 [2024-07-15 17:29:13.378781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.728 [2024-07-15 17:29:13.378792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.728 [2024-07-15 17:29:13.378805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.728 [2024-07-15 17:29:13.378818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.728 [2024-07-15 17:29:13.378831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:23:02.728 [2024-07-15 17:29:13.378843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.381752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.728 [2024-07-15 17:29:13.381783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.728 [2024-07-15 17:29:13.381798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:23:02.728 [2024-07-15 17:29:13.381810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.382009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.728 [2024-07-15 17:29:13.382036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.728 [2024-07-15 17:29:13.382055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:23:02.728 [2024-07-15 17:29:13.382068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.392884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.393076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.728 [2024-07-15 17:29:13.393192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.393258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.393399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.393566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.728 [2024-07-15 17:29:13.393622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.393662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.393780] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.393911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.728 [2024-07-15 17:29:13.393964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.394003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.394149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.394199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.728 [2024-07-15 17:29:13.394299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.394434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.415218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.415648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.728 [2024-07-15 17:29:13.415764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.415815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.430350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.430650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.728 [2024-07-15 17:29:13.430696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.430711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.430814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.430834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.728 [2024-07-15 17:29:13.430848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.430860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.430914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.430931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.728 [2024-07-15 17:29:13.430944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.430957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.431102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.431122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.728 [2024-07-15 17:29:13.431137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.431156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.431207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.431226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.728 [2024-07-15 17:29:13.431239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.431251] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.431314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.431331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.728 [2024-07-15 17:29:13.431345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.431373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.431449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.728 [2024-07-15 17:29:13.431467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.728 [2024-07-15 17:29:13.431480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.728 [2024-07-15 17:29:13.431493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.728 [2024-07-15 17:29:13.431699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.720 ms, result 0 00:23:03.294 00:23:03.294 00:23:03.294 17:29:13 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.823 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.823 Process with pid 93344 is not found 00:23:05.823 Remove shared memory files 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 93344 00:23:05.823 17:29:16 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 93344 ']' 00:23:05.823 17:29:16 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 93344 00:23:05.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93344) - No such process 00:23:05.823 17:29:16 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 93344 is not found' 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:05.823 17:29:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:05.823 ************************************ 00:23:05.823 END TEST ftl_restore 00:23:05.823 ************************************ 00:23:05.823 00:23:05.823 real 3m19.053s 00:23:05.823 user 3m3.816s 00:23:05.823 sys 0m17.503s 00:23:05.823 17:29:16 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.823 17:29:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:05.823 17:29:16 ftl -- common/autotest_common.sh@1142 -- # return 0 00:23:05.823 17:29:16 ftl -- ftl/ftl.sh@77 -- # run_test 
ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.823 17:29:16 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:05.823 17:29:16 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.823 17:29:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:05.823 ************************************ 00:23:05.823 START TEST ftl_dirty_shutdown 00:23:05.823 ************************************ 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.823 * Looking for test storage... 00:23:05.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=95433 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 95433 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 95433 ']' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.823 17:29:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.823 [2024-07-15 17:29:16.666515] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
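At this point dirty_shutdown.sh has launched its own SPDK target (pid 95433, core mask 0x1) and is sitting in waitforlisten until the RPC socket comes up. A minimal sketch of that launch-and-wait step, assuming the repository path shown in the trace and using an rpc_get_methods poll only as an illustrative stand-in for waitforlisten's real readiness check:

    # Sketch only; the actual wait logic lives in test/common/autotest_common.sh (waitforlisten).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done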
00:23:05.823 [2024-07-15 17:29:16.666707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95433 ] 00:23:06.081 [2024-07-15 17:29:16.822628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:06.081 [2024-07-15 17:29:16.846036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.348 [2024-07-15 17:29:16.981309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:06.935 17:29:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:07.193 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:07.451 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:07.451 { 00:23:07.451 "name": "nvme0n1", 00:23:07.451 "aliases": [ 00:23:07.451 "c4b493b1-fb48-41f2-8b95-ff61b3273eaa" 00:23:07.451 ], 00:23:07.451 "product_name": "NVMe disk", 00:23:07.451 "block_size": 4096, 00:23:07.451 "num_blocks": 1310720, 00:23:07.451 "uuid": "c4b493b1-fb48-41f2-8b95-ff61b3273eaa", 00:23:07.451 "assigned_rate_limits": { 00:23:07.451 "rw_ios_per_sec": 0, 00:23:07.451 "rw_mbytes_per_sec": 0, 00:23:07.451 "r_mbytes_per_sec": 0, 00:23:07.451 "w_mbytes_per_sec": 0 00:23:07.451 }, 00:23:07.451 "claimed": true, 00:23:07.451 "claim_type": "read_many_write_one", 00:23:07.451 "zoned": false, 00:23:07.451 "supported_io_types": { 00:23:07.451 "read": true, 00:23:07.451 "write": true, 00:23:07.451 "unmap": true, 00:23:07.451 "flush": true, 00:23:07.451 "reset": true, 00:23:07.451 "nvme_admin": true, 00:23:07.451 "nvme_io": true, 00:23:07.451 "nvme_io_md": false, 00:23:07.451 "write_zeroes": true, 00:23:07.451 "zcopy": false, 00:23:07.451 "get_zone_info": false, 00:23:07.451 "zone_management": false, 00:23:07.451 "zone_append": false, 00:23:07.451 "compare": 
true, 00:23:07.451 "compare_and_write": false, 00:23:07.451 "abort": true, 00:23:07.451 "seek_hole": false, 00:23:07.451 "seek_data": false, 00:23:07.451 "copy": true, 00:23:07.451 "nvme_iov_md": false 00:23:07.451 }, 00:23:07.451 "driver_specific": { 00:23:07.451 "nvme": [ 00:23:07.451 { 00:23:07.451 "pci_address": "0000:00:11.0", 00:23:07.451 "trid": { 00:23:07.451 "trtype": "PCIe", 00:23:07.451 "traddr": "0000:00:11.0" 00:23:07.451 }, 00:23:07.451 "ctrlr_data": { 00:23:07.451 "cntlid": 0, 00:23:07.451 "vendor_id": "0x1b36", 00:23:07.451 "model_number": "QEMU NVMe Ctrl", 00:23:07.451 "serial_number": "12341", 00:23:07.451 "firmware_revision": "8.0.0", 00:23:07.451 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:07.451 "oacs": { 00:23:07.451 "security": 0, 00:23:07.451 "format": 1, 00:23:07.451 "firmware": 0, 00:23:07.451 "ns_manage": 1 00:23:07.451 }, 00:23:07.451 "multi_ctrlr": false, 00:23:07.451 "ana_reporting": false 00:23:07.451 }, 00:23:07.451 "vs": { 00:23:07.451 "nvme_version": "1.4" 00:23:07.451 }, 00:23:07.451 "ns_data": { 00:23:07.451 "id": 1, 00:23:07.451 "can_share": false 00:23:07.451 } 00:23:07.451 } 00:23:07.451 ], 00:23:07.451 "mp_policy": "active_passive" 00:23:07.451 } 00:23:07.451 } 00:23:07.451 ]' 00:23:07.451 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:07.709 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:07.967 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0fd2b964-0640-46c3-9bb7-c6cb8630dd9b 00:23:07.967 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:07.967 17:29:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fd2b964-0640-46c3-9bb7-c6cb8630dd9b 00:23:08.226 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:08.484 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d3944113-c1d4-4fb1-92e6-173107dad589 00:23:08.484 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d3944113-c1d4-4fb1-92e6-173107dad589 00:23:08.741 17:29:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.741 17:29:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.999 17:29:19 
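The create_base_bdev and clear_lvols steps traced above attach the QEMU NVMe controller at 0000:00:11.0 as nvme0, size it from bdev_get_bdevs output, and sweep away any lvstore left over from a previous run. Condensed into the underlying RPC calls (a sketch; the jq filters are the ones the trace itself uses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')    # 4096
    nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')    # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                              # 5120 MiB base device
    # clear_lvols: delete any stale lvstores before building the test's own.
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done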
ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:08.999 { 00:23:08.999 "name": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:08.999 "aliases": [ 00:23:08.999 "lvs/nvme0n1p0" 00:23:08.999 ], 00:23:08.999 "product_name": "Logical Volume", 00:23:08.999 "block_size": 4096, 00:23:08.999 "num_blocks": 26476544, 00:23:08.999 "uuid": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:08.999 "assigned_rate_limits": { 00:23:08.999 "rw_ios_per_sec": 0, 00:23:08.999 "rw_mbytes_per_sec": 0, 00:23:08.999 "r_mbytes_per_sec": 0, 00:23:08.999 "w_mbytes_per_sec": 0 00:23:08.999 }, 00:23:08.999 "claimed": false, 00:23:08.999 "zoned": false, 00:23:08.999 "supported_io_types": { 00:23:08.999 "read": true, 00:23:08.999 "write": true, 00:23:08.999 "unmap": true, 00:23:08.999 "flush": false, 00:23:08.999 "reset": true, 00:23:08.999 "nvme_admin": false, 00:23:08.999 "nvme_io": false, 00:23:08.999 "nvme_io_md": false, 00:23:08.999 "write_zeroes": true, 00:23:08.999 "zcopy": false, 00:23:08.999 "get_zone_info": false, 00:23:08.999 "zone_management": false, 00:23:08.999 "zone_append": false, 00:23:08.999 "compare": false, 00:23:08.999 "compare_and_write": false, 00:23:08.999 "abort": false, 00:23:08.999 "seek_hole": true, 00:23:08.999 "seek_data": true, 00:23:08.999 "copy": false, 00:23:08.999 "nvme_iov_md": false 00:23:08.999 }, 00:23:08.999 "driver_specific": { 00:23:08.999 "lvol": { 00:23:08.999 "lvol_store_uuid": "d3944113-c1d4-4fb1-92e6-173107dad589", 00:23:08.999 "base_bdev": "nvme0n1", 00:23:08.999 "thin_provision": true, 00:23:08.999 "num_allocated_clusters": 0, 00:23:08.999 "snapshot": false, 00:23:08.999 "clone": false, 00:23:08.999 "esnap_clone": false 00:23:08.999 } 00:23:08.999 } 00:23:08.999 } 00:23:08.999 ]' 00:23:08.999 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local 
base_size=5171 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:09.258 17:29:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:09.515 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:09.773 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:09.773 { 00:23:09.773 "name": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:09.773 "aliases": [ 00:23:09.773 "lvs/nvme0n1p0" 00:23:09.773 ], 00:23:09.773 "product_name": "Logical Volume", 00:23:09.773 "block_size": 4096, 00:23:09.773 "num_blocks": 26476544, 00:23:09.773 "uuid": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:09.773 "assigned_rate_limits": { 00:23:09.773 "rw_ios_per_sec": 0, 00:23:09.773 "rw_mbytes_per_sec": 0, 00:23:09.773 "r_mbytes_per_sec": 0, 00:23:09.773 "w_mbytes_per_sec": 0 00:23:09.773 }, 00:23:09.773 "claimed": false, 00:23:09.773 "zoned": false, 00:23:09.773 "supported_io_types": { 00:23:09.773 "read": true, 00:23:09.773 "write": true, 00:23:09.773 "unmap": true, 00:23:09.773 "flush": false, 00:23:09.773 "reset": true, 00:23:09.773 "nvme_admin": false, 00:23:09.773 "nvme_io": false, 00:23:09.773 "nvme_io_md": false, 00:23:09.773 "write_zeroes": true, 00:23:09.773 "zcopy": false, 00:23:09.773 "get_zone_info": false, 00:23:09.773 "zone_management": false, 00:23:09.773 "zone_append": false, 00:23:09.773 "compare": false, 00:23:09.773 "compare_and_write": false, 00:23:09.773 "abort": false, 00:23:09.773 "seek_hole": true, 00:23:09.773 "seek_data": true, 00:23:09.773 "copy": false, 00:23:09.773 "nvme_iov_md": false 00:23:09.773 }, 00:23:09.773 "driver_specific": { 00:23:09.773 "lvol": { 00:23:09.773 "lvol_store_uuid": "d3944113-c1d4-4fb1-92e6-173107dad589", 00:23:09.773 "base_bdev": "nvme0n1", 00:23:09.773 "thin_provision": true, 00:23:09.773 "num_allocated_clusters": 0, 00:23:09.773 "snapshot": false, 00:23:09.773 "clone": false, 00:23:09.773 "esnap_clone": false 00:23:09.773 } 00:23:09.773 } 00:23:09.773 } 00:23:09.773 ]' 00:23:09.773 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:09.773 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:10.031 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:10.031 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:10.031 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:10.031 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:10.031 
17:29:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:10.031 17:29:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:10.289 17:29:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:10.548 { 00:23:10.548 "name": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:10.548 "aliases": [ 00:23:10.548 "lvs/nvme0n1p0" 00:23:10.548 ], 00:23:10.548 "product_name": "Logical Volume", 00:23:10.548 "block_size": 4096, 00:23:10.548 "num_blocks": 26476544, 00:23:10.548 "uuid": "f9a1ea41-0aa7-4c1d-b13b-43f5597d4203", 00:23:10.548 "assigned_rate_limits": { 00:23:10.548 "rw_ios_per_sec": 0, 00:23:10.548 "rw_mbytes_per_sec": 0, 00:23:10.548 "r_mbytes_per_sec": 0, 00:23:10.548 "w_mbytes_per_sec": 0 00:23:10.548 }, 00:23:10.548 "claimed": false, 00:23:10.548 "zoned": false, 00:23:10.548 "supported_io_types": { 00:23:10.548 "read": true, 00:23:10.548 "write": true, 00:23:10.548 "unmap": true, 00:23:10.548 "flush": false, 00:23:10.548 "reset": true, 00:23:10.548 "nvme_admin": false, 00:23:10.548 "nvme_io": false, 00:23:10.548 "nvme_io_md": false, 00:23:10.548 "write_zeroes": true, 00:23:10.548 "zcopy": false, 00:23:10.548 "get_zone_info": false, 00:23:10.548 "zone_management": false, 00:23:10.548 "zone_append": false, 00:23:10.548 "compare": false, 00:23:10.548 "compare_and_write": false, 00:23:10.548 "abort": false, 00:23:10.548 "seek_hole": true, 00:23:10.548 "seek_data": true, 00:23:10.548 "copy": false, 00:23:10.548 "nvme_iov_md": false 00:23:10.548 }, 00:23:10.548 "driver_specific": { 00:23:10.548 "lvol": { 00:23:10.548 "lvol_store_uuid": "d3944113-c1d4-4fb1-92e6-173107dad589", 00:23:10.548 "base_bdev": "nvme0n1", 00:23:10.548 "thin_provision": true, 00:23:10.548 "num_allocated_clusters": 0, 00:23:10.548 "snapshot": false, 00:23:10.548 "clone": false, 00:23:10.548 "esnap_clone": false 00:23:10.548 } 00:23:10.548 } 00:23:10.548 } 00:23:10.548 ]' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:10.548 17:29:21 
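A quick check on the sizes reported so far: the thin-provisioned lvol f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 exposes 26476544 blocks of 4096 bytes, i.e. 26476544 x 4096 B = 103424 MiB, matching the 103424 MiB requested from bdev_lvol_create. The 5171 MiB figure is the NV-cache slice carved out of nvc0n1 for the write buffer; it is consistent with a roughly 5% sizing rule (103424 * 5 / 100 = 5171 with integer division), though the exact rule lives in test/ftl/common.sh rather than in this log. The next traced step splits the cache controller accordingly (command as traced):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # yields nvc0n1p0, which is later passed as -c (NV cache) to bdev_ftl_create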
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 --l2p_dram_limit 10' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:10.548 17:29:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f9a1ea41-0aa7-4c1d-b13b-43f5597d4203 --l2p_dram_limit 10 -c nvc0n1p0 00:23:10.807 [2024-07-15 17:29:21.548118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.548196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.807 [2024-07-15 17:29:21.548220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:10.807 [2024-07-15 17:29:21.548236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.548334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.548397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.807 [2024-07-15 17:29:21.548416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:10.807 [2024-07-15 17:29:21.548445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.548492] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.807 [2024-07-15 17:29:21.548952] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.807 [2024-07-15 17:29:21.548985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.549003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.807 [2024-07-15 17:29:21.549017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:23:10.807 [2024-07-15 17:29:21.549032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.549204] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ad4aa91-20f1-4a5d-9054-892030a0a588 00:23:10.807 [2024-07-15 17:29:21.551734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.551776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:10.807 [2024-07-15 17:29:21.551797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:10.807 [2024-07-15 17:29:21.551810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.566160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.566223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.807 [2024-07-15 17:29:21.566249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.252 ms 00:23:10.807 [2024-07-15 17:29:21.566263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.566452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 
17:29:21.566474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.807 [2024-07-15 17:29:21.566494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:10.807 [2024-07-15 17:29:21.566506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.566622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.566641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:10.807 [2024-07-15 17:29:21.566658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:10.807 [2024-07-15 17:29:21.566680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.566723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.807 [2024-07-15 17:29:21.569848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.569906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.807 [2024-07-15 17:29:21.569949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:23:10.807 [2024-07-15 17:29:21.569965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.570029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.570049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:10.807 [2024-07-15 17:29:21.570063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:10.807 [2024-07-15 17:29:21.570081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.807 [2024-07-15 17:29:21.570120] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:10.807 [2024-07-15 17:29:21.570312] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:10.807 [2024-07-15 17:29:21.570331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:10.807 [2024-07-15 17:29:21.570351] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:10.807 [2024-07-15 17:29:21.570382] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:10.807 [2024-07-15 17:29:21.570400] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:10.807 [2024-07-15 17:29:21.570440] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:10.807 [2024-07-15 17:29:21.570476] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:10.807 [2024-07-15 17:29:21.570500] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:10.807 [2024-07-15 17:29:21.570524] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:10.807 [2024-07-15 17:29:21.570538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.807 [2024-07-15 17:29:21.570553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:10.807 [2024-07-15 17:29:21.570575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:23:10.808 [2024-07-15 
17:29:21.570591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.808 [2024-07-15 17:29:21.570686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.808 [2024-07-15 17:29:21.570721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:10.808 [2024-07-15 17:29:21.570735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:10.808 [2024-07-15 17:29:21.570754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.808 [2024-07-15 17:29:21.570901] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:10.808 [2024-07-15 17:29:21.570924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:10.808 [2024-07-15 17:29:21.570938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.808 [2024-07-15 17:29:21.570955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.570972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:10.808 [2024-07-15 17:29:21.570986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.570998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:10.808 [2024-07-15 17:29:21.571023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.808 [2024-07-15 17:29:21.571048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:10.808 [2024-07-15 17:29:21.571062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:10.808 [2024-07-15 17:29:21.571072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.808 [2024-07-15 17:29:21.571089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:10.808 [2024-07-15 17:29:21.571101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:10.808 [2024-07-15 17:29:21.571114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:10.808 [2024-07-15 17:29:21.571140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:10.808 [2024-07-15 17:29:21.571175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:10.808 [2024-07-15 17:29:21.571213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:10.808 [2024-07-15 17:29:21.571253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:10.808 
[2024-07-15 17:29:21.571267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:10.808 [2024-07-15 17:29:21.571297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:10.808 [2024-07-15 17:29:21.571334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.808 [2024-07-15 17:29:21.571359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:10.808 [2024-07-15 17:29:21.571388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:10.808 [2024-07-15 17:29:21.571403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.808 [2024-07-15 17:29:21.571418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:10.808 [2024-07-15 17:29:21.571429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:10.808 [2024-07-15 17:29:21.571443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:10.808 [2024-07-15 17:29:21.571479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:10.808 [2024-07-15 17:29:21.571490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571508] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:10.808 [2024-07-15 17:29:21.571520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:10.808 [2024-07-15 17:29:21.571548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.808 [2024-07-15 17:29:21.571599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:10.808 [2024-07-15 17:29:21.571611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:10.808 [2024-07-15 17:29:21.571625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:10.808 [2024-07-15 17:29:21.571636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:10.808 [2024-07-15 17:29:21.571651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:10.808 [2024-07-15 17:29:21.571663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:10.808 [2024-07-15 17:29:21.571683] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:10.808 [2024-07-15 17:29:21.571698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:10.808 [2024-07-15 17:29:21.571729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:10.808 [2024-07-15 17:29:21.571745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:10.808 [2024-07-15 17:29:21.571757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:10.808 [2024-07-15 17:29:21.571772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:10.808 [2024-07-15 17:29:21.571783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:10.808 [2024-07-15 17:29:21.571801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:10.808 [2024-07-15 17:29:21.571814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:10.808 [2024-07-15 17:29:21.571829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:10.808 [2024-07-15 17:29:21.571841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:10.808 [2024-07-15 17:29:21.571910] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:10.808 [2024-07-15 17:29:21.571930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:10.808 [2024-07-15 17:29:21.571958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:10.808 [2024-07-15 17:29:21.571973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:10.808 [2024-07-15 17:29:21.571985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:10.808 [2024-07-15 17:29:21.572007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.808 [2024-07-15 17:29:21.572028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:10.808 [2024-07-15 17:29:21.572049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:23:10.808 [2024-07-15 17:29:21.572061] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.808 [2024-07-15 17:29:21.572137] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:10.808 [2024-07-15 17:29:21.572163] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:14.137 [2024-07-15 17:29:24.516801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.137 [2024-07-15 17:29:24.516904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:14.137 [2024-07-15 17:29:24.516949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2944.658 ms 00:23:14.137 [2024-07-15 17:29:24.516963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.137 [2024-07-15 17:29:24.539030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.137 [2024-07-15 17:29:24.539122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.138 [2024-07-15 17:29:24.539146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.909 ms 00:23:14.138 [2024-07-15 17:29:24.539165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.539449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.539470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.138 [2024-07-15 17:29:24.539488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:23:14.138 [2024-07-15 17:29:24.539501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.558652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.558701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.138 [2024-07-15 17:29:24.558723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.071 ms 00:23:14.138 [2024-07-15 17:29:24.558741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.558838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.558854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.138 [2024-07-15 17:29:24.558871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:14.138 [2024-07-15 17:29:24.558900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.559875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.559899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.138 [2024-07-15 17:29:24.559916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:23:14.138 [2024-07-15 17:29:24.559928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.560120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.560135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.138 [2024-07-15 17:29:24.560152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:23:14.138 [2024-07-15 17:29:24.560171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 
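Each trace_step block in this startup sequence prints the same four fields (Action or Rollback, name, duration, status), so the output above reads as a per-step timing table for the 'FTL startup' management process. The earlier layout dump also fixes the arithmetic behind the --l2p_dram_limit 10 argument:

    # L2P sizing implied by the layout dump above:
    #   20971520 entries * 4 B per entry = 83886080 B = 80 MiB   (full L2P region, "blocks: 80.00 MiB")
    #   --l2p_dram_limit 10  -> at most ~10 MiB of that table resident in DRAM;
    #   the ftl_l2p_cache_init notice that follows reports 9 (of 10) MiB usable after cache overhead.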
[2024-07-15 17:29:24.573840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.573879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.138 [2024-07-15 17:29:24.573899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.639 ms 00:23:14.138 [2024-07-15 17:29:24.573911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.584679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:14.138 [2024-07-15 17:29:24.590344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.590432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:14.138 [2024-07-15 17:29:24.590451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.323 ms 00:23:14.138 [2024-07-15 17:29:24.590466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.660592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.660715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:14.138 [2024-07-15 17:29:24.660738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.087 ms 00:23:14.138 [2024-07-15 17:29:24.660758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.661002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.661040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:14.138 [2024-07-15 17:29:24.661053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:23:14.138 [2024-07-15 17:29:24.661068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.664939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.664997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:14.138 [2024-07-15 17:29:24.665028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:23:14.138 [2024-07-15 17:29:24.665043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.668294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.668338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:14.138 [2024-07-15 17:29:24.668356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:23:14.138 [2024-07-15 17:29:24.668404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.668900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.668932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:14.138 [2024-07-15 17:29:24.668947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:23:14.138 [2024-07-15 17:29:24.668964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.710779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.710871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:14.138 [2024-07-15 
17:29:24.710897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.786 ms 00:23:14.138 [2024-07-15 17:29:24.710913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.717045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.717089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:14.138 [2024-07-15 17:29:24.717106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.084 ms 00:23:14.138 [2024-07-15 17:29:24.717132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.720835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.720877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:14.138 [2024-07-15 17:29:24.720893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.661 ms 00:23:14.138 [2024-07-15 17:29:24.720906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.725026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.725070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:14.138 [2024-07-15 17:29:24.725087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.079 ms 00:23:14.138 [2024-07-15 17:29:24.725104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.725163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.725187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:14.138 [2024-07-15 17:29:24.725208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:14.138 [2024-07-15 17:29:24.725229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.725386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.138 [2024-07-15 17:29:24.725412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:14.138 [2024-07-15 17:29:24.725427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:14.138 [2024-07-15 17:29:24.725445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.138 [2024-07-15 17:29:24.727351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3178.483 ms, result 0 00:23:14.138 { 00:23:14.138 "name": "ftl0", 00:23:14.138 "uuid": "4ad4aa91-20f1-4a5d-9054-892030a0a588" 00:23:14.138 } 00:23:14.138 17:29:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:14.138 17:29:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:14.396 17:29:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:14.396 17:29:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:14.396 17:29:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:14.654 /dev/nbd0 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 
00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:14.654 1+0 records in 00:23:14.654 1+0 records out 00:23:14.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111093 s, 3.7 MB/s 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:23:14.654 17:29:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:14.654 [2024-07-15 17:29:25.428922] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:23:14.654 [2024-07-15 17:29:25.429130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95571 ] 00:23:14.912 [2024-07-15 17:29:25.588316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:14.912 [2024-07-15 17:29:25.603158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.912 [2024-07-15 17:29:25.704494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.171  Copying: 156/1024 [MB] (156 MBps) Copying: 314/1024 [MB] (158 MBps) Copying: 471/1024 [MB] (156 MBps) Copying: 625/1024 [MB] (153 MBps) Copying: 774/1024 [MB] (148 MBps) Copying: 924/1024 [MB] (150 MBps) Copying: 1024/1024 [MB] (average 154 MBps) 00:23:22.171 00:23:22.171 17:29:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:24.713 17:29:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:24.713 [2024-07-15 17:29:35.235811] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:23:24.713 [2024-07-15 17:29:35.236066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95675 ] 00:23:24.713 [2024-07-15 17:29:35.393639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:24.713 [2024-07-15 17:29:35.415617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.713 [2024-07-15 17:29:35.532817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.109  Copying: 15/1024 [MB] (15 MBps) Copying: 31/1024 [MB] (16 MBps) Copying: 44/1024 [MB] (12 MBps) Copying: 60/1024 [MB] (16 MBps) Copying: 76/1024 [MB] (16 MBps) Copying: 93/1024 [MB] (16 MBps) Copying: 110/1024 [MB] (16 MBps) Copying: 124/1024 [MB] (13 MBps) Copying: 137/1024 [MB] (13 MBps) Copying: 151/1024 [MB] (13 MBps) Copying: 165/1024 [MB] (14 MBps) Copying: 180/1024 [MB] (15 MBps) Copying: 197/1024 [MB] (16 MBps) Copying: 213/1024 [MB] (16 MBps) Copying: 230/1024 [MB] (16 MBps) Copying: 246/1024 [MB] (16 MBps) Copying: 261/1024 [MB] (15 MBps) Copying: 278/1024 [MB] (16 MBps) Copying: 294/1024 [MB] (15 MBps) Copying: 311/1024 [MB] (16 MBps) Copying: 327/1024 [MB] (16 MBps) Copying: 344/1024 [MB] (16 MBps) Copying: 361/1024 [MB] (16 MBps) Copying: 377/1024 [MB] (16 MBps) Copying: 393/1024 [MB] (16 MBps) Copying: 410/1024 [MB] (16 MBps) Copying: 426/1024 [MB] (16 MBps) Copying: 442/1024 [MB] (16 MBps) Copying: 457/1024 [MB] (14 MBps) Copying: 474/1024 [MB] (16 MBps) Copying: 489/1024 [MB] (15 MBps) Copying: 506/1024 [MB] (16 MBps) Copying: 522/1024 [MB] (15 MBps) Copying: 538/1024 [MB] (16 MBps) Copying: 555/1024 [MB] (16 MBps) Copying: 572/1024 [MB] (16 MBps) Copying: 589/1024 [MB] (17 MBps) Copying: 606/1024 [MB] (17 MBps) Copying: 623/1024 [MB] (16 MBps) Copying: 639/1024 [MB] (16 MBps) Copying: 655/1024 [MB] (15 MBps) Copying: 672/1024 [MB] (16 MBps) Copying: 689/1024 [MB] (17 MBps) Copying: 706/1024 [MB] (16 MBps) Copying: 723/1024 [MB] (17 MBps) Copying: 740/1024 [MB] (17 MBps) Copying: 757/1024 [MB] (17 MBps) Copying: 773/1024 [MB] (16 MBps) Copying: 790/1024 [MB] (16 MBps) Copying: 808/1024 [MB] (17 MBps) Copying: 824/1024 [MB] (16 MBps) Copying: 841/1024 [MB] (16 MBps) Copying: 858/1024 [MB] (17 MBps) Copying: 875/1024 [MB] (17 MBps) Copying: 892/1024 [MB] (16 MBps) Copying: 909/1024 [MB] (17 MBps) Copying: 926/1024 [MB] (17 MBps) Copying: 943/1024 [MB] (16 MBps) Copying: 960/1024 [MB] (16 MBps) Copying: 977/1024 [MB] (17 MBps) Copying: 994/1024 [MB] (17 MBps) Copying: 1011/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:24:28.109 00:24:28.109 17:30:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:28.109 17:30:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:28.366 17:30:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:28.627 [2024-07-15 17:30:39.308463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.308550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.627 [2024-07-15 17:30:39.308611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 
00:24:28.627 [2024-07-15 17:30:39.308632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.308709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.627 [2024-07-15 17:30:39.310261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.310320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.627 [2024-07-15 17:30:39.310347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:24:28.627 [2024-07-15 17:30:39.310393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.312541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.312613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.627 [2024-07-15 17:30:39.312643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:24:28.627 [2024-07-15 17:30:39.312672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.330057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.330125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.627 [2024-07-15 17:30:39.330158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.341 ms 00:24:28.627 [2024-07-15 17:30:39.330188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.337215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.337288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:28.627 [2024-07-15 17:30:39.337315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:24:28.627 [2024-07-15 17:30:39.337392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.339066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.339150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.627 [2024-07-15 17:30:39.339179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:24:28.627 [2024-07-15 17:30:39.339209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.344218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.344298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.627 [2024-07-15 17:30:39.344328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.927 ms 00:24:28.627 [2024-07-15 17:30:39.344381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.344586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.344625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.627 [2024-07-15 17:30:39.344651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:28.627 [2024-07-15 17:30:39.344678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.346671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.346745] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:28.627 [2024-07-15 17:30:39.346773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:24:28.627 [2024-07-15 17:30:39.346800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.348499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.348559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:28.627 [2024-07-15 17:30:39.348588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:24:28.627 [2024-07-15 17:30:39.348616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.350020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.350082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.627 [2024-07-15 17:30:39.350110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:24:28.627 [2024-07-15 17:30:39.350138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.351581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-07-15 17:30:39.351637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.627 [2024-07-15 17:30:39.351665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:24:28.627 [2024-07-15 17:30:39.351696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-07-15 17:30:39.351761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.627 [2024-07-15 17:30:39.351805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.351968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.627 [2024-07-15 17:30:39.352002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352130] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 
17:30:39.352790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.352974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:24:28.628 [2024-07-15 17:30:39.353571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.353970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.628 [2024-07-15 17:30:39.354669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.628 [2024-07-15 17:30:39.354698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ad4aa91-20f1-4a5d-9054-892030a0a588 00:24:28.628 [2024-07-15 17:30:39.354726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:28.628 [2024-07-15 17:30:39.354763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:28.628 [2024-07-15 17:30:39.354795] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:28.628 [2024-07-15 17:30:39.354822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:28.629 [2024-07-15 17:30:39.354851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.629 [2024-07-15 17:30:39.354876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.629 [2024-07-15 17:30:39.354897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.629 [2024-07-15 17:30:39.354915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.629 [2024-07-15 17:30:39.354942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.629 [2024-07-15 17:30:39.354966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.629 [2024-07-15 17:30:39.355007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.629 [2024-07-15 17:30:39.355026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:24:28.629 [2024-07-15 17:30:39.355047] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.358520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.629 [2024-07-15 17:30:39.358578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.629 [2024-07-15 17:30:39.358606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:24:28.629 [2024-07-15 17:30:39.358635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.358886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.629 [2024-07-15 17:30:39.358935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.629 [2024-07-15 17:30:39.358959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:24:28.629 [2024-07-15 17:30:39.358988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.371912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.371977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.629 [2024-07-15 17:30:39.372005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.372033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.372163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.372199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.629 [2024-07-15 17:30:39.372236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.372264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.372470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.372516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.629 [2024-07-15 17:30:39.372543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.372569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.372616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.372655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.629 [2024-07-15 17:30:39.372678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.372705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.392773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.392884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.629 [2024-07-15 17:30:39.392915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.392942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.408414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.408533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.629 [2024-07-15 17:30:39.408565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.408591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.408779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.408829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.629 [2024-07-15 17:30:39.408858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.408881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.408984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.409032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.629 [2024-07-15 17:30:39.409061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.409089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.409259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.409300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.629 [2024-07-15 17:30:39.409325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.409388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.409485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.409523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.629 [2024-07-15 17:30:39.409548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.409582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.409671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.409711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.629 [2024-07-15 17:30:39.409735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.409761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.409868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.629 [2024-07-15 17:30:39.409924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.629 [2024-07-15 17:30:39.409956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.629 [2024-07-15 17:30:39.409986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.629 [2024-07-15 17:30:39.410323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 101.774 ms, result 0 00:24:28.629 true 00:24:28.629 17:30:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 95433 00:24:28.629 17:30:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid95433 00:24:28.629 17:30:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:28.896 [2024-07-15 17:30:39.545823] Starting SPDK v24.09-pre git sha1 a95bbf233 / 
DPDK 24.07.0-rc2 initialization... 00:24:28.896 [2024-07-15 17:30:39.546076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96312 ] 00:24:28.896 [2024-07-15 17:30:39.704636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:28.896 [2024-07-15 17:30:39.726732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.161 [2024-07-15 17:30:39.845953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.021  Copying: 156/1024 [MB] (156 MBps) Copying: 313/1024 [MB] (156 MBps) Copying: 470/1024 [MB] (157 MBps) Copying: 631/1024 [MB] (160 MBps) Copying: 798/1024 [MB] (167 MBps) Copying: 969/1024 [MB] (170 MBps) Copying: 1024/1024 [MB] (average 161 MBps) 00:24:36.021 00:24:36.021 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 95433 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:36.021 17:30:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:36.021 [2024-07-15 17:30:46.813349] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:24:36.021 [2024-07-15 17:30:46.813636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96387 ] 00:24:36.280 [2024-07-15 17:30:46.968804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:36.280 [2024-07-15 17:30:46.992442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.280 [2024-07-15 17:30:47.115441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.537 [2024-07-15 17:30:47.283559] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:36.537 [2024-07-15 17:30:47.283649] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:36.537 [2024-07-15 17:30:47.349911] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:36.537 [2024-07-15 17:30:47.350288] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:36.537 [2024-07-15 17:30:47.350514] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:36.796 [2024-07-15 17:30:47.613261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.613330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:36.796 [2024-07-15 17:30:47.613413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:36.796 [2024-07-15 17:30:47.613439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.613526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.613553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.796 [2024-07-15 17:30:47.613567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:36.796 [2024-07-15 17:30:47.613578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.613609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:36.796 [2024-07-15 17:30:47.613914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:36.796 [2024-07-15 17:30:47.613947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.613960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.796 [2024-07-15 17:30:47.613974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:24:36.796 [2024-07-15 17:30:47.613985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.616536] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:36.796 [2024-07-15 17:30:47.620035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.620077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:36.796 [2024-07-15 17:30:47.620106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:24:36.796 [2024-07-15 17:30:47.620119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.620213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.620232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:36.796 [2024-07-15 17:30:47.620246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:36.796 [2024-07-15 17:30:47.620272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.632386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.632440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.796 [2024-07-15 17:30:47.632459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.046 ms 00:24:36.796 [2024-07-15 17:30:47.632471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.632612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.632634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.796 [2024-07-15 17:30:47.632649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:36.796 [2024-07-15 17:30:47.632674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.632826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.632854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:36.796 [2024-07-15 17:30:47.632868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:36.796 [2024-07-15 17:30:47.632880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.632921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.796 [2024-07-15 17:30:47.635728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.635769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.796 [2024-07-15 17:30:47.635785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:24:36.796 [2024-07-15 17:30:47.635797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.635849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.635876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:36.796 [2024-07-15 17:30:47.635889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:36.796 [2024-07-15 17:30:47.635905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.635935] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:36.796 [2024-07-15 17:30:47.635979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:36.796 [2024-07-15 17:30:47.636038] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:36.796 [2024-07-15 17:30:47.636061] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:36.796 [2024-07-15 17:30:47.636170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:36.796 [2024-07-15 17:30:47.636187] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:36.796 [2024-07-15 17:30:47.636222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:36.796 [2024-07-15 17:30:47.636249] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:36.796 [2024-07-15 17:30:47.636274] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:36.796 [2024-07-15 17:30:47.636288] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:36.796 [2024-07-15 17:30:47.636307] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:36.796 [2024-07-15 17:30:47.636327] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:36.796 [2024-07-15 17:30:47.636339] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:36.796 [2024-07-15 17:30:47.636381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.636396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:36.796 [2024-07-15 17:30:47.636414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:24:36.796 [2024-07-15 17:30:47.636435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.636529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.796 [2024-07-15 17:30:47.636543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:36.796 [2024-07-15 17:30:47.636554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:36.796 [2024-07-15 17:30:47.636565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.796 [2024-07-15 17:30:47.636718] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:36.796 [2024-07-15 17:30:47.636749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:36.796 [2024-07-15 17:30:47.636799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.796 [2024-07-15 17:30:47.636813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.796 [2024-07-15 17:30:47.636835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:36.796 [2024-07-15 17:30:47.636845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:36.796 [2024-07-15 17:30:47.636856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:36.797 [2024-07-15 17:30:47.636867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:36.797 [2024-07-15 17:30:47.636878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:36.797 [2024-07-15 17:30:47.636896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.797 [2024-07-15 17:30:47.636907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:36.797 [2024-07-15 17:30:47.636932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:36.797 [2024-07-15 17:30:47.636943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.797 [2024-07-15 17:30:47.636955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:36.797 [2024-07-15 17:30:47.636974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:36.797 [2024-07-15 17:30:47.636992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:36.797 [2024-07-15 17:30:47.637021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637031] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:36.797 [2024-07-15 17:30:47.637061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:36.797 [2024-07-15 17:30:47.637097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:36.797 [2024-07-15 17:30:47.637153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:36.797 [2024-07-15 17:30:47.637212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:36.797 [2024-07-15 17:30:47.637245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.797 [2024-07-15 17:30:47.637267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:36.797 [2024-07-15 17:30:47.637278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:36.797 [2024-07-15 17:30:47.637288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.797 [2024-07-15 17:30:47.637299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:36.797 [2024-07-15 17:30:47.637309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:36.797 [2024-07-15 17:30:47.637324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:36.797 [2024-07-15 17:30:47.637392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:36.797 [2024-07-15 17:30:47.637406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637416] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:36.797 [2024-07-15 17:30:47.637431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:36.797 [2024-07-15 17:30:47.637456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.797 [2024-07-15 17:30:47.637484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:36.797 [2024-07-15 17:30:47.637495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:36.797 [2024-07-15 17:30:47.637506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:36.797 
[2024-07-15 17:30:47.637517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:36.797 [2024-07-15 17:30:47.637527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:36.797 [2024-07-15 17:30:47.637539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:36.797 [2024-07-15 17:30:47.637553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:36.797 [2024-07-15 17:30:47.637573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:36.797 [2024-07-15 17:30:47.637599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:36.797 [2024-07-15 17:30:47.637614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:36.797 [2024-07-15 17:30:47.637627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:36.797 [2024-07-15 17:30:47.637639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:36.797 [2024-07-15 17:30:47.637654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:36.797 [2024-07-15 17:30:47.637675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:36.797 [2024-07-15 17:30:47.637696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:36.797 [2024-07-15 17:30:47.637715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:36.797 [2024-07-15 17:30:47.637728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:36.797 [2024-07-15 17:30:47.637786] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:36.797 [2024-07-15 17:30:47.637800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.797 [2024-07-15 17:30:47.637824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:36.797 [2024-07-15 17:30:47.637840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:36.797 [2024-07-15 17:30:47.637859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:36.797 [2024-07-15 17:30:47.637882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.797 [2024-07-15 17:30:47.637901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:36.797 [2024-07-15 17:30:47.637914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:24:36.797 [2024-07-15 17:30:47.637926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.055 [2024-07-15 17:30:47.678888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.055 [2024-07-15 17:30:47.678987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.055 [2024-07-15 17:30:47.679018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.864 ms 00:24:37.055 [2024-07-15 17:30:47.679044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.055 [2024-07-15 17:30:47.679241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.055 [2024-07-15 17:30:47.679264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:37.055 [2024-07-15 17:30:47.679302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:37.055 [2024-07-15 17:30:47.679320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.055 [2024-07-15 17:30:47.698595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.055 [2024-07-15 17:30:47.698672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.055 [2024-07-15 17:30:47.698705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.094 ms 00:24:37.055 [2024-07-15 17:30:47.698730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.055 [2024-07-15 17:30:47.698833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.055 [2024-07-15 17:30:47.698868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.055 [2024-07-15 17:30:47.698885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:37.055 [2024-07-15 17:30:47.698905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.055 [2024-07-15 17:30:47.699854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.055 [2024-07-15 17:30:47.699885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.055 [2024-07-15 17:30:47.699901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:24:37.056 [2024-07-15 17:30:47.699934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.700168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.700206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.056 [2024-07-15 17:30:47.700235] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:24:37.056 [2024-07-15 17:30:47.700258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.711881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.711940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.056 [2024-07-15 17:30:47.711965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.581 ms 00:24:37.056 [2024-07-15 17:30:47.711980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.716240] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:37.056 [2024-07-15 17:30:47.716290] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:37.056 [2024-07-15 17:30:47.716314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.716330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:37.056 [2024-07-15 17:30:47.716346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:24:37.056 [2024-07-15 17:30:47.716380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.736698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.736852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:37.056 [2024-07-15 17:30:47.736882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.205 ms 00:24:37.056 [2024-07-15 17:30:47.736907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.741170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.741218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:37.056 [2024-07-15 17:30:47.741238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:24:37.056 [2024-07-15 17:30:47.741252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.743074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.743116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:37.056 [2024-07-15 17:30:47.743135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:24:37.056 [2024-07-15 17:30:47.743151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.743781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.743813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:37.056 [2024-07-15 17:30:47.743832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:24:37.056 [2024-07-15 17:30:47.743846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.778829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.778936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:37.056 [2024-07-15 17:30:47.778963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
34.922 ms 00:24:37.056 [2024-07-15 17:30:47.779004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.790030] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:37.056 [2024-07-15 17:30:47.796391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.796455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:37.056 [2024-07-15 17:30:47.796495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.290 ms 00:24:37.056 [2024-07-15 17:30:47.796520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.796698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.796729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:37.056 [2024-07-15 17:30:47.796758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:37.056 [2024-07-15 17:30:47.796783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.796927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.796957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:37.056 [2024-07-15 17:30:47.796974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:37.056 [2024-07-15 17:30:47.796989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.797031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.797050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:37.056 [2024-07-15 17:30:47.797074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:37.056 [2024-07-15 17:30:47.797091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.797160] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:37.056 [2024-07-15 17:30:47.797182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.797196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:37.056 [2024-07-15 17:30:47.797212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:37.056 [2024-07-15 17:30:47.797235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.802944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.803004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:37.056 [2024-07-15 17:30:47.803037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.659 ms 00:24:37.056 [2024-07-15 17:30:47.803063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 [2024-07-15 17:30:47.803172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.056 [2024-07-15 17:30:47.803210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:37.056 [2024-07-15 17:30:47.803227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:37.056 [2024-07-15 17:30:47.803253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.056 
[2024-07-15 17:30:47.805133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 191.168 ms, result 0 00:25:19.192  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (25 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 132/1024 [MB] (26 MBps) Copying: 158/1024 [MB] (26 MBps) Copying: 184/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (24 MBps) Copying: 234/1024 [MB] (25 MBps) Copying: 259/1024 [MB] (24 MBps) Copying: 285/1024 [MB] (25 MBps) Copying: 311/1024 [MB] (26 MBps) Copying: 337/1024 [MB] (26 MBps) Copying: 363/1024 [MB] (25 MBps) Copying: 389/1024 [MB] (26 MBps) Copying: 411/1024 [MB] (22 MBps) Copying: 434/1024 [MB] (22 MBps) Copying: 456/1024 [MB] (22 MBps) Copying: 479/1024 [MB] (23 MBps) Copying: 502/1024 [MB] (22 MBps) Copying: 525/1024 [MB] (23 MBps) Copying: 549/1024 [MB] (23 MBps) Copying: 574/1024 [MB] (24 MBps) Copying: 599/1024 [MB] (24 MBps) Copying: 624/1024 [MB] (25 MBps) Copying: 648/1024 [MB] (24 MBps) Copying: 673/1024 [MB] (24 MBps) Copying: 697/1024 [MB] (24 MBps) Copying: 721/1024 [MB] (23 MBps) Copying: 745/1024 [MB] (24 MBps) Copying: 771/1024 [MB] (26 MBps) Copying: 797/1024 [MB] (25 MBps) Copying: 823/1024 [MB] (25 MBps) Copying: 849/1024 [MB] (26 MBps) Copying: 874/1024 [MB] (25 MBps) Copying: 898/1024 [MB] (23 MBps) Copying: 922/1024 [MB] (24 MBps) Copying: 947/1024 [MB] (25 MBps) Copying: 972/1024 [MB] (24 MBps) Copying: 996/1024 [MB] (23 MBps) Copying: 1022/1024 [MB] (26 MBps) Copying: 1048412/1048576 [kB] (1120 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 17:31:29.992561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:29.992657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:19.192 [2024-07-15 17:31:29.992689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:19.192 [2024-07-15 17:31:29.992701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.192 [2024-07-15 17:31:29.994551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:19.192 [2024-07-15 17:31:29.998545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:29.998584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:19.192 [2024-07-15 17:31:29.998601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:25:19.192 [2024-07-15 17:31:29.998618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.192 [2024-07-15 17:31:30.009021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:30.009080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:19.192 [2024-07-15 17:31:30.009112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.828 ms 00:25:19.192 [2024-07-15 17:31:30.009125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.192 [2024-07-15 17:31:30.030206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:30.030246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:19.192 [2024-07-15 17:31:30.030262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.050 ms 00:25:19.192 [2024-07-15 17:31:30.030273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:19.192 [2024-07-15 17:31:30.035640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:30.035671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:19.192 [2024-07-15 17:31:30.035685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:25:19.192 [2024-07-15 17:31:30.035703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.192 [2024-07-15 17:31:30.036995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:30.037033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:19.192 [2024-07-15 17:31:30.037047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:25:19.192 [2024-07-15 17:31:30.037058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.192 [2024-07-15 17:31:30.041524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.192 [2024-07-15 17:31:30.041562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:19.192 [2024-07-15 17:31:30.041576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms 00:25:19.192 [2024-07-15 17:31:30.041587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.145718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.452 [2024-07-15 17:31:30.145775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:19.452 [2024-07-15 17:31:30.145793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.092 ms 00:25:19.452 [2024-07-15 17:31:30.145815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.147849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.452 [2024-07-15 17:31:30.147885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:19.452 [2024-07-15 17:31:30.147901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:25:19.452 [2024-07-15 17:31:30.147911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.149432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.452 [2024-07-15 17:31:30.149465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:19.452 [2024-07-15 17:31:30.149478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:25:19.452 [2024-07-15 17:31:30.149488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.150768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.452 [2024-07-15 17:31:30.150812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:19.452 [2024-07-15 17:31:30.150849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:25:19.452 [2024-07-15 17:31:30.150860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.152064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.452 [2024-07-15 17:31:30.152100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:19.452 [2024-07-15 17:31:30.152113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:25:19.452 [2024-07-15 
17:31:30.152123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.452 [2024-07-15 17:31:30.152156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:19.452 [2024-07-15 17:31:30.152177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127232 / 261120 wr_cnt: 1 state: open 00:25:19.452 [2024-07-15 17:31:30.152191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:19.452 [2024-07-15 17:31:30.152203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152474] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 
17:31:30.152775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.152993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:25:19.453 [2024-07-15 17:31:30.153071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:19.453 [2024-07-15 17:31:30.153854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:25:19.454 [2024-07-15 17:31:30.153906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:19.454 [2024-07-15 17:31:30.153953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:19.454 [2024-07-15 17:31:30.154110] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:19.454 [2024-07-15 17:31:30.154261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ad4aa91-20f1-4a5d-9054-892030a0a588 00:25:19.454 [2024-07-15 17:31:30.154322] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127232 00:25:19.454 [2024-07-15 17:31:30.154387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128192 00:25:19.454 [2024-07-15 17:31:30.154566] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127232 00:25:19.454 [2024-07-15 17:31:30.154620] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:25:19.454 [2024-07-15 17:31:30.154671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:19.454 [2024-07-15 17:31:30.154704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:19.454 [2024-07-15 17:31:30.154746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:19.454 [2024-07-15 17:31:30.154847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:19.454 [2024-07-15 17:31:30.154891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:19.454 [2024-07-15 17:31:30.154925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.454 [2024-07-15 17:31:30.154959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:19.454 [2024-07-15 17:31:30.154992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:25:19.454 [2024-07-15 17:31:30.155076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.158033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.454 [2024-07-15 17:31:30.158175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:19.454 [2024-07-15 17:31:30.158198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:25:19.454 [2024-07-15 17:31:30.158235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.158439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.454 [2024-07-15 17:31:30.158457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:19.454 [2024-07-15 17:31:30.158482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:25:19.454 [2024-07-15 17:31:30.158493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.168523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.168566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.454 [2024-07-15 17:31:30.168587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.168598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.168654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.168669] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.454 [2024-07-15 17:31:30.168680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.168691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.168777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.168795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.454 [2024-07-15 17:31:30.168807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.168823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.168844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.168876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.454 [2024-07-15 17:31:30.168887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.168897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.185804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.185873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.454 [2024-07-15 17:31:30.185896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.185917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.199869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.199934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.454 [2024-07-15 17:31:30.199951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.199962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.454 [2024-07-15 17:31:30.200091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.454 [2024-07-15 17:31:30.200182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.454 [2024-07-15 17:31:30.200340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200430] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:19.454 [2024-07-15 17:31:30.200476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.454 [2024-07-15 17:31:30.200564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.454 [2024-07-15 17:31:30.200653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.454 [2024-07-15 17:31:30.200665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.454 [2024-07-15 17:31:30.200675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.454 [2024-07-15 17:31:30.200849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.074 ms, result 0 00:25:20.387 00:25:20.387 00:25:20.387 17:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:22.912 17:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:22.912 [2024-07-15 17:31:33.372963] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:25:22.912 [2024-07-15 17:31:33.373238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96840 ] 00:25:22.912 [2024-07-15 17:31:33.525919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:22.912 [2024-07-15 17:31:33.547719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.912 [2024-07-15 17:31:33.684648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.169 [2024-07-15 17:31:33.861032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.169 [2024-07-15 17:31:33.861147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.427 [2024-07-15 17:31:34.026985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.427 [2024-07-15 17:31:34.027095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:23.427 [2024-07-15 17:31:34.027119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:23.428 [2024-07-15 17:31:34.027131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.027226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.027254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.428 [2024-07-15 17:31:34.027283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:23.428 [2024-07-15 17:31:34.027294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.027346] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:23.428 [2024-07-15 17:31:34.027703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:23.428 [2024-07-15 17:31:34.027732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.027746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.428 [2024-07-15 17:31:34.027760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:25:23.428 [2024-07-15 17:31:34.027772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.030915] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:23.428 [2024-07-15 17:31:34.034879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.034935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:23.428 [2024-07-15 17:31:34.034953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.966 ms 00:25:23.428 [2024-07-15 17:31:34.034966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.035059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.035080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:23.428 [2024-07-15 17:31:34.035094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:23.428 [2024-07-15 17:31:34.035105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.049645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.049725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.428 [2024-07-15 17:31:34.049756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.446 ms 00:25:23.428 [2024-07-15 17:31:34.049769] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.049931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.049952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.428 [2024-07-15 17:31:34.049972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:23.428 [2024-07-15 17:31:34.049985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.050103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.050130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:23.428 [2024-07-15 17:31:34.050155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:23.428 [2024-07-15 17:31:34.050168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.050218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:23.428 [2024-07-15 17:31:34.053345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.053431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.428 [2024-07-15 17:31:34.053448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:25:23.428 [2024-07-15 17:31:34.053460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.053532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.053551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:23.428 [2024-07-15 17:31:34.053566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:23.428 [2024-07-15 17:31:34.053578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.053623] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:23.428 [2024-07-15 17:31:34.053659] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:23.428 [2024-07-15 17:31:34.053714] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:23.428 [2024-07-15 17:31:34.053761] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:23.428 [2024-07-15 17:31:34.053880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:23.428 [2024-07-15 17:31:34.053921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:23.428 [2024-07-15 17:31:34.053963] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:23.428 [2024-07-15 17:31:34.053979] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054008] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054022] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:23.428 [2024-07-15 17:31:34.054033] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:25:23.428 [2024-07-15 17:31:34.054045] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:23.428 [2024-07-15 17:31:34.054056] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:23.428 [2024-07-15 17:31:34.054069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.054085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:23.428 [2024-07-15 17:31:34.054098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:25:23.428 [2024-07-15 17:31:34.054109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.054200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.054217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:23.428 [2024-07-15 17:31:34.054229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:23.428 [2024-07-15 17:31:34.054241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.054366] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:23.428 [2024-07-15 17:31:34.054402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:23.428 [2024-07-15 17:31:34.054436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:23.428 [2024-07-15 17:31:34.054473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:23.428 [2024-07-15 17:31:34.054505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.428 [2024-07-15 17:31:34.054526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:23.428 [2024-07-15 17:31:34.054536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:23.428 [2024-07-15 17:31:34.054547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.428 [2024-07-15 17:31:34.054558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:23.428 [2024-07-15 17:31:34.054568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:23.428 [2024-07-15 17:31:34.054612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:23.428 [2024-07-15 17:31:34.054637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:23.428 [2024-07-15 17:31:34.054686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054698] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:23.428 [2024-07-15 17:31:34.054720] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:23.428 [2024-07-15 17:31:34.054753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:23.428 [2024-07-15 17:31:34.054787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.428 [2024-07-15 17:31:34.054808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:23.428 [2024-07-15 17:31:34.054819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.428 [2024-07-15 17:31:34.054845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:23.428 [2024-07-15 17:31:34.054857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:23.428 [2024-07-15 17:31:34.054876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.428 [2024-07-15 17:31:34.054888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:23.428 [2024-07-15 17:31:34.054899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:23.428 [2024-07-15 17:31:34.054911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:23.428 [2024-07-15 17:31:34.054932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:23.428 [2024-07-15 17:31:34.054943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.054955] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:23.428 [2024-07-15 17:31:34.054967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:23.428 [2024-07-15 17:31:34.054989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.428 [2024-07-15 17:31:34.055001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.428 [2024-07-15 17:31:34.055028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:23.428 [2024-07-15 17:31:34.055040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:23.428 [2024-07-15 17:31:34.055050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:23.428 [2024-07-15 17:31:34.055068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:23.428 [2024-07-15 17:31:34.055079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:23.428 [2024-07-15 17:31:34.055096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:25:23.428 [2024-07-15 17:31:34.055111] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:23.428 [2024-07-15 17:31:34.055125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:23.428 [2024-07-15 17:31:34.055152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:23.428 [2024-07-15 17:31:34.055164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:23.428 [2024-07-15 17:31:34.055177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:23.428 [2024-07-15 17:31:34.055189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:23.428 [2024-07-15 17:31:34.055200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:23.428 [2024-07-15 17:31:34.055212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:23.428 [2024-07-15 17:31:34.055223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:23.428 [2024-07-15 17:31:34.055235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:23.428 [2024-07-15 17:31:34.055246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:23.428 [2024-07-15 17:31:34.055307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:23.428 [2024-07-15 17:31:34.055321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.428 [2024-07-15 17:31:34.055346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:23.428 [2024-07-15 17:31:34.055357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:23.428 [2024-07-15 17:31:34.055381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:23.428 [2024-07-15 17:31:34.055393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.055426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:23.428 [2024-07-15 17:31:34.055438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:25:23.428 [2024-07-15 17:31:34.055450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.089637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.089724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.428 [2024-07-15 17:31:34.089748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.111 ms 00:25:23.428 [2024-07-15 17:31:34.089785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.089988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.090007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:23.428 [2024-07-15 17:31:34.090056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:23.428 [2024-07-15 17:31:34.090069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.109053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.109129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.428 [2024-07-15 17:31:34.109151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.811 ms 00:25:23.428 [2024-07-15 17:31:34.109163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.109255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.109281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.428 [2024-07-15 17:31:34.109313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.428 [2024-07-15 17:31:34.109325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.110360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.110423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.428 [2024-07-15 17:31:34.110446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:25:23.428 [2024-07-15 17:31:34.110459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.110666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 17:31:34.110688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.428 [2024-07-15 17:31:34.110707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:23.428 [2024-07-15 17:31:34.110724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.428 [2024-07-15 17:31:34.122075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.428 [2024-07-15 
17:31:34.122125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.428 [2024-07-15 17:31:34.122164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.320 ms 00:25:23.428 [2024-07-15 17:31:34.122185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.126358] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:23.429 [2024-07-15 17:31:34.126422] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:23.429 [2024-07-15 17:31:34.126443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.126457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:23.429 [2024-07-15 17:31:34.126470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:25:23.429 [2024-07-15 17:31:34.126481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.143562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.143627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:23.429 [2024-07-15 17:31:34.143652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.034 ms 00:25:23.429 [2024-07-15 17:31:34.143681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.146030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.146071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:23.429 [2024-07-15 17:31:34.146093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:25:23.429 [2024-07-15 17:31:34.146106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.147876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.147916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:23.429 [2024-07-15 17:31:34.147932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:25:23.429 [2024-07-15 17:31:34.147944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.148525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.148552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:23.429 [2024-07-15 17:31:34.148572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:25:23.429 [2024-07-15 17:31:34.148600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.180842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.180965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:23.429 [2024-07-15 17:31:34.181001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.211 ms 00:25:23.429 [2024-07-15 17:31:34.181013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.189568] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:23.429 [2024-07-15 17:31:34.193993] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.194045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:23.429 [2024-07-15 17:31:34.194062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.910 ms 00:25:23.429 [2024-07-15 17:31:34.194102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.194250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.194279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:23.429 [2024-07-15 17:31:34.194301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:23.429 [2024-07-15 17:31:34.194314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.197209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.197245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:23.429 [2024-07-15 17:31:34.197261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:25:23.429 [2024-07-15 17:31:34.197284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.197334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.197350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:23.429 [2024-07-15 17:31:34.197418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:23.429 [2024-07-15 17:31:34.197431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.197484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:23.429 [2024-07-15 17:31:34.197508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.197522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:23.429 [2024-07-15 17:31:34.197535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:23.429 [2024-07-15 17:31:34.197546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.202815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.202859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:23.429 [2024-07-15 17:31:34.202876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:25:23.429 [2024-07-15 17:31:34.202890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.203025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.429 [2024-07-15 17:31:34.203053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:23.429 [2024-07-15 17:31:34.203067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:23.429 [2024-07-15 17:31:34.203079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.429 [2024-07-15 17:31:34.211057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 182.267 ms, result 0 00:26:02.753  Copying: 888/1048576 [kB] (888 kBps) Copying: 4300/1048576 [kB] (3412 kBps) Copying: 24/1024 [MB] (19 MBps) Copying: 
53/1024 [MB] (29 MBps) Copying: 82/1024 [MB] (29 MBps) Copying: 111/1024 [MB] (29 MBps) Copying: 139/1024 [MB] (27 MBps) Copying: 168/1024 [MB] (28 MBps) Copying: 195/1024 [MB] (27 MBps) Copying: 222/1024 [MB] (26 MBps) Copying: 249/1024 [MB] (27 MBps) Copying: 277/1024 [MB] (27 MBps) Copying: 303/1024 [MB] (26 MBps) Copying: 329/1024 [MB] (26 MBps) Copying: 356/1024 [MB] (26 MBps) Copying: 384/1024 [MB] (27 MBps) Copying: 411/1024 [MB] (27 MBps) Copying: 437/1024 [MB] (26 MBps) Copying: 464/1024 [MB] (26 MBps) Copying: 492/1024 [MB] (28 MBps) Copying: 520/1024 [MB] (28 MBps) Copying: 548/1024 [MB] (28 MBps) Copying: 576/1024 [MB] (27 MBps) Copying: 604/1024 [MB] (27 MBps) Copying: 632/1024 [MB] (28 MBps) Copying: 660/1024 [MB] (28 MBps) Copying: 686/1024 [MB] (26 MBps) Copying: 713/1024 [MB] (26 MBps) Copying: 740/1024 [MB] (27 MBps) Copying: 769/1024 [MB] (28 MBps) Copying: 797/1024 [MB] (28 MBps) Copying: 826/1024 [MB] (28 MBps) Copying: 853/1024 [MB] (26 MBps) Copying: 881/1024 [MB] (28 MBps) Copying: 910/1024 [MB] (28 MBps) Copying: 940/1024 [MB] (29 MBps) Copying: 967/1024 [MB] (27 MBps) Copying: 996/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 17:32:13.500753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.500902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.753 [2024-07-15 17:32:13.500951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:02.753 [2024-07-15 17:32:13.500992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.501055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.753 [2024-07-15 17:32:13.502955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.503006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.753 [2024-07-15 17:32:13.503030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:26:02.753 [2024-07-15 17:32:13.503068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.503949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.504007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.753 [2024-07-15 17:32:13.504125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:26:02.753 [2024-07-15 17:32:13.504158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.517107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.517173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.753 [2024-07-15 17:32:13.517197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.912 ms 00:26:02.753 [2024-07-15 17:32:13.517213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.525707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.525752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:02.753 [2024-07-15 17:32:13.525790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.444 ms 00:26:02.753 [2024-07-15 17:32:13.525806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:02.753 [2024-07-15 17:32:13.527629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.527675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.753 [2024-07-15 17:32:13.527694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:26:02.753 [2024-07-15 17:32:13.527708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.531267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.531320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.753 [2024-07-15 17:32:13.531341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.512 ms 00:26:02.753 [2024-07-15 17:32:13.531375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.753 [2024-07-15 17:32:13.534848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.753 [2024-07-15 17:32:13.534900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.754 [2024-07-15 17:32:13.534921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:26:02.754 [2024-07-15 17:32:13.534937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.754 [2024-07-15 17:32:13.537019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.754 [2024-07-15 17:32:13.537063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:02.754 [2024-07-15 17:32:13.537082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:26:02.754 [2024-07-15 17:32:13.537096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.754 [2024-07-15 17:32:13.538551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.754 [2024-07-15 17:32:13.538593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:02.754 [2024-07-15 17:32:13.538611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:26:02.754 [2024-07-15 17:32:13.538625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.754 [2024-07-15 17:32:13.539846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.754 [2024-07-15 17:32:13.539883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:02.754 [2024-07-15 17:32:13.539899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:26:02.754 [2024-07-15 17:32:13.539913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.754 [2024-07-15 17:32:13.541110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.754 [2024-07-15 17:32:13.541160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:02.754 [2024-07-15 17:32:13.541178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:26:02.754 [2024-07-15 17:32:13.541212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.754 [2024-07-15 17:32:13.541260] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:02.754 [2024-07-15 17:32:13.541295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:02.754 [2024-07-15 17:32:13.541314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
2: 3584 / 261120 wr_cnt: 1 state: open 00:26:02.754 [2024-07-15 17:32:13.541331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.541994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542130] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:02.754 [2024-07-15 17:32:13.542286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 
17:32:13.542536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:02.755 [2024-07-15 17:32:13.542926] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:02.755 [2024-07-15 17:32:13.542942] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ad4aa91-20f1-4a5d-9054-892030a0a588 00:26:02.755 [2024-07-15 17:32:13.542958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:02.755 [2024-07-15 17:32:13.542973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 139456 00:26:02.755 [2024-07-15 17:32:13.542987] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 137472 00:26:02.755 [2024-07-15 17:32:13.543003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0144 00:26:02.755 [2024-07-15 17:32:13.543018] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:02.755 [2024-07-15 17:32:13.543033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:02.755 [2024-07-15 17:32:13.543047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:02.755 [2024-07-15 17:32:13.543060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:02.755 [2024-07-15 17:32:13.543074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:02.755 [2024-07-15 17:32:13.543088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.755 [2024-07-15 17:32:13.543122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:02.755 [2024-07-15 17:32:13.543143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:26:02.755 [2024-07-15 17:32:13.543157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.546194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.755 [2024-07-15 17:32:13.546244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:02.755 [2024-07-15 17:32:13.546274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:26:02.755 [2024-07-15 17:32:13.546290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.546512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.755 [2024-07-15 17:32:13.546537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:02.755 [2024-07-15 17:32:13.546554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:26:02.755 [2024-07-15 17:32:13.546569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.556732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.556799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.755 [2024-07-15 17:32:13.556821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.556854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.556972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.556992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.755 [2024-07-15 17:32:13.557013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.557041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.557172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.557197] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.755 [2024-07-15 17:32:13.557214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.557228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.557257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.557301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.755 [2024-07-15 17:32:13.557318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.557333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.581804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.581914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.755 [2024-07-15 17:32:13.581940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.581956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.596810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.755 [2024-07-15 17:32:13.596914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.755 [2024-07-15 17:32:13.596939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.755 [2024-07-15 17:32:13.596955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.755 [2024-07-15 17:32:13.597065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.756 [2024-07-15 17:32:13.597103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.597176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.756 [2024-07-15 17:32:13.597238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.597438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.756 [2024-07-15 17:32:13.597487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.597571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:02.756 [2024-07-15 17:32:13.597630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.597727] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.756 [2024-07-15 17:32:13.597764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.597858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.756 [2024-07-15 17:32:13.597879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.756 [2024-07-15 17:32:13.597903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.756 [2024-07-15 17:32:13.597918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.756 [2024-07-15 17:32:13.598131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 97.337 ms, result 0 00:26:03.321 00:26:03.321 00:26:03.321 17:32:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:05.902 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:05.902 17:32:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:05.902 [2024-07-15 17:32:16.325999] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:26:05.902 [2024-07-15 17:32:16.326247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97257 ] 00:26:05.902 [2024-07-15 17:32:16.482218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:05.902 [2024-07-15 17:32:16.504263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.902 [2024-07-15 17:32:16.637006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.161 [2024-07-15 17:32:16.794558] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:06.161 [2024-07-15 17:32:16.794670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:06.161 [2024-07-15 17:32:16.958976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.959075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:06.161 [2024-07-15 17:32:16.959112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:06.161 [2024-07-15 17:32:16.959139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.959244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.959266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:06.161 [2024-07-15 17:32:16.959284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:06.161 [2024-07-15 17:32:16.959296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.959329] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:06.161 [2024-07-15 17:32:16.959750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:06.161 [2024-07-15 17:32:16.959779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.959792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:06.161 [2024-07-15 17:32:16.959804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:26:06.161 [2024-07-15 17:32:16.959826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.962381] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:06.161 [2024-07-15 17:32:16.965856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.965910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:06.161 [2024-07-15 17:32:16.965928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.477 ms 00:26:06.161 [2024-07-15 17:32:16.965956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.966036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.966057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:06.161 [2024-07-15 17:32:16.966070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:06.161 [2024-07-15 17:32:16.966082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.977954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.978047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:06.161 [2024-07-15 17:32:16.978069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:26:06.161 [2024-07-15 17:32:16.978083] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.978231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.978258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:06.161 [2024-07-15 17:32:16.978288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:06.161 [2024-07-15 17:32:16.978300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.978465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.978489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:06.161 [2024-07-15 17:32:16.978503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:06.161 [2024-07-15 17:32:16.978514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.978556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:06.161 [2024-07-15 17:32:16.981347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.981406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:06.161 [2024-07-15 17:32:16.981422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:26:06.161 [2024-07-15 17:32:16.981434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.981495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.981517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:06.161 [2024-07-15 17:32:16.981531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:06.161 [2024-07-15 17:32:16.981542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.981572] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:06.161 [2024-07-15 17:32:16.981606] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:06.161 [2024-07-15 17:32:16.981654] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:06.161 [2024-07-15 17:32:16.981697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:06.161 [2024-07-15 17:32:16.981811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:06.161 [2024-07-15 17:32:16.981827] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:06.161 [2024-07-15 17:32:16.981842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:06.161 [2024-07-15 17:32:16.981858] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:06.161 [2024-07-15 17:32:16.981872] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:06.161 [2024-07-15 17:32:16.981885] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:06.161 [2024-07-15 17:32:16.981897] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:26:06.161 [2024-07-15 17:32:16.981908] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:06.161 [2024-07-15 17:32:16.981931] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:06.161 [2024-07-15 17:32:16.981948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.981963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:06.161 [2024-07-15 17:32:16.981976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:26:06.161 [2024-07-15 17:32:16.981988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.982087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-15 17:32:16.982101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:06.161 [2024-07-15 17:32:16.982112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:06.161 [2024-07-15 17:32:16.982124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-15 17:32:16.982269] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:06.161 [2024-07-15 17:32:16.982291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:06.161 [2024-07-15 17:32:16.982310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:06.161 [2024-07-15 17:32:16.982322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.161 [2024-07-15 17:32:16.982347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:06.161 [2024-07-15 17:32:16.982380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:06.161 [2024-07-15 17:32:16.982394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:06.161 [2024-07-15 17:32:16.982407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:06.161 [2024-07-15 17:32:16.982423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:06.161 [2024-07-15 17:32:16.982435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:06.161 [2024-07-15 17:32:16.982447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:06.161 [2024-07-15 17:32:16.982458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:06.161 [2024-07-15 17:32:16.982467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:06.161 [2024-07-15 17:32:16.982478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:06.162 [2024-07-15 17:32:16.982488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:06.162 [2024-07-15 17:32:16.982513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:06.162 [2024-07-15 17:32:16.982535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:06.162 [2024-07-15 17:32:16.982573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982585] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:06.162 [2024-07-15 17:32:16.982609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:06.162 [2024-07-15 17:32:16.982649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:06.162 [2024-07-15 17:32:16.982680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:06.162 [2024-07-15 17:32:16.982711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:06.162 [2024-07-15 17:32:16.982732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:06.162 [2024-07-15 17:32:16.982743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:06.162 [2024-07-15 17:32:16.982753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:06.162 [2024-07-15 17:32:16.982763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:06.162 [2024-07-15 17:32:16.982773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:06.162 [2024-07-15 17:32:16.982783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:06.162 [2024-07-15 17:32:16.982809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:06.162 [2024-07-15 17:32:16.982819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982829] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:06.162 [2024-07-15 17:32:16.982840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:06.162 [2024-07-15 17:32:16.982851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:06.162 [2024-07-15 17:32:16.982862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:06.162 [2024-07-15 17:32:16.982884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:06.162 [2024-07-15 17:32:16.982895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:06.162 [2024-07-15 17:32:16.982906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:06.162 [2024-07-15 17:32:16.982917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:06.162 [2024-07-15 17:32:16.982928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:06.162 [2024-07-15 17:32:16.982939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:26:06.162 [2024-07-15 17:32:16.982952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:06.162 [2024-07-15 17:32:16.982967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.982980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:06.162 [2024-07-15 17:32:16.982995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:06.162 [2024-07-15 17:32:16.983008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:06.162 [2024-07-15 17:32:16.983019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:06.162 [2024-07-15 17:32:16.983031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:06.162 [2024-07-15 17:32:16.983042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:06.162 [2024-07-15 17:32:16.983054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:06.162 [2024-07-15 17:32:16.983065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:06.162 [2024-07-15 17:32:16.983078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:06.162 [2024-07-15 17:32:16.983089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:06.162 [2024-07-15 17:32:16.983145] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:06.162 [2024-07-15 17:32:16.983158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:06.162 [2024-07-15 17:32:16.983186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:06.162 [2024-07-15 17:32:16.983198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:06.162 [2024-07-15 17:32:16.983210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:06.162 [2024-07-15 17:32:16.983223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.162 [2024-07-15 17:32:16.983239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:06.162 [2024-07-15 17:32:16.983251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:26:06.162 [2024-07-15 17:32:16.983262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.162 [2024-07-15 17:32:17.013060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.162 [2024-07-15 17:32:17.013164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:06.162 [2024-07-15 17:32:17.013211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.721 ms 00:26:06.162 [2024-07-15 17:32:17.013240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.162 [2024-07-15 17:32:17.013473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.162 [2024-07-15 17:32:17.013498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:06.162 [2024-07-15 17:32:17.013526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:26:06.162 [2024-07-15 17:32:17.013554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.030611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.030699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:06.421 [2024-07-15 17:32:17.030723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.917 ms 00:26:06.421 [2024-07-15 17:32:17.030736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.030836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.030861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:06.421 [2024-07-15 17:32:17.030880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:06.421 [2024-07-15 17:32:17.030893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.031758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.031795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:06.421 [2024-07-15 17:32:17.031810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:26:06.421 [2024-07-15 17:32:17.031822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.032019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.032050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:06.421 [2024-07-15 17:32:17.032069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:26:06.421 [2024-07-15 17:32:17.032081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.042304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 
17:32:17.042388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.421 [2024-07-15 17:32:17.042412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.190 ms 00:26:06.421 [2024-07-15 17:32:17.042425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.046309] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:06.421 [2024-07-15 17:32:17.046377] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:06.421 [2024-07-15 17:32:17.046399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.046412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:06.421 [2024-07-15 17:32:17.046426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:26:06.421 [2024-07-15 17:32:17.046438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.062446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.062535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:06.421 [2024-07-15 17:32:17.062584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.955 ms 00:26:06.421 [2024-07-15 17:32:17.062597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.065713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.065759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:06.421 [2024-07-15 17:32:17.065779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:26:06.421 [2024-07-15 17:32:17.065791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.067517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.067555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:06.421 [2024-07-15 17:32:17.067582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:26:06.421 [2024-07-15 17:32:17.067595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.068082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.068109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:06.421 [2024-07-15 17:32:17.068129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:26:06.421 [2024-07-15 17:32:17.068141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.097414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.097519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:06.421 [2024-07-15 17:32:17.097550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.245 ms 00:26:06.421 [2024-07-15 17:32:17.097564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.106189] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:06.421 [2024-07-15 17:32:17.111712] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.111768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:06.421 [2024-07-15 17:32:17.111790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.049 ms 00:26:06.421 [2024-07-15 17:32:17.111804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.111964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.111986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:06.421 [2024-07-15 17:32:17.112006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:06.421 [2024-07-15 17:32:17.112018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.421 [2024-07-15 17:32:17.113512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.421 [2024-07-15 17:32:17.113553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:06.422 [2024-07-15 17:32:17.113569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:26:06.422 [2024-07-15 17:32:17.113581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.422 [2024-07-15 17:32:17.113625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.422 [2024-07-15 17:32:17.113641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:06.422 [2024-07-15 17:32:17.113654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:06.422 [2024-07-15 17:32:17.113666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.422 [2024-07-15 17:32:17.113714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:06.422 [2024-07-15 17:32:17.113739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.422 [2024-07-15 17:32:17.113763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:06.422 [2024-07-15 17:32:17.113775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:06.422 [2024-07-15 17:32:17.113787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.422 [2024-07-15 17:32:17.118724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.422 [2024-07-15 17:32:17.118770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:06.422 [2024-07-15 17:32:17.118802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:26:06.422 [2024-07-15 17:32:17.118814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.422 [2024-07-15 17:32:17.118908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.422 [2024-07-15 17:32:17.118934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:06.422 [2024-07-15 17:32:17.118958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:06.422 [2024-07-15 17:32:17.118970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.422 [2024-07-15 17:32:17.120578] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.022 ms, result 0 00:26:47.251  Copying: 27/1024 [MB] (27 MBps) Copying: 52/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (23 MBps) Copying: 101/1024 [MB] 
(25 MBps) Copying: 127/1024 [MB] (26 MBps) Copying: 153/1024 [MB] (25 MBps) Copying: 178/1024 [MB] (25 MBps) Copying: 204/1024 [MB] (25 MBps) Copying: 229/1024 [MB] (25 MBps) Copying: 255/1024 [MB] (25 MBps) Copying: 281/1024 [MB] (26 MBps) Copying: 307/1024 [MB] (25 MBps) Copying: 332/1024 [MB] (25 MBps) Copying: 358/1024 [MB] (26 MBps) Copying: 385/1024 [MB] (26 MBps) Copying: 410/1024 [MB] (25 MBps) Copying: 436/1024 [MB] (26 MBps) Copying: 463/1024 [MB] (26 MBps) Copying: 489/1024 [MB] (26 MBps) Copying: 516/1024 [MB] (26 MBps) Copying: 542/1024 [MB] (25 MBps) Copying: 569/1024 [MB] (26 MBps) Copying: 595/1024 [MB] (26 MBps) Copying: 620/1024 [MB] (24 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 671/1024 [MB] (25 MBps) Copying: 694/1024 [MB] (23 MBps) Copying: 719/1024 [MB] (24 MBps) Copying: 743/1024 [MB] (24 MBps) Copying: 768/1024 [MB] (24 MBps) Copying: 794/1024 [MB] (26 MBps) Copying: 817/1024 [MB] (23 MBps) Copying: 840/1024 [MB] (22 MBps) Copying: 863/1024 [MB] (23 MBps) Copying: 887/1024 [MB] (23 MBps) Copying: 912/1024 [MB] (24 MBps) Copying: 937/1024 [MB] (24 MBps) Copying: 962/1024 [MB] (25 MBps) Copying: 987/1024 [MB] (25 MBps) Copying: 1011/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 17:32:57.992800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:57.992936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:47.251 [2024-07-15 17:32:57.992966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.251 [2024-07-15 17:32:57.992983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:57.993040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.251 [2024-07-15 17:32:57.994339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:57.994384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.251 [2024-07-15 17:32:57.994404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:26:47.251 [2024-07-15 17:32:57.994419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:57.994749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:57.994797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.251 [2024-07-15 17:32:57.994814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:26:47.251 [2024-07-15 17:32:57.994843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.000553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.000597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.251 [2024-07-15 17:32:58.000615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.681 ms 00:26:47.251 [2024-07-15 17:32:58.000631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.009133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.009184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:47.251 [2024-07-15 17:32:58.009210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.469 ms 00:26:47.251 [2024-07-15 17:32:58.009225] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.011529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.011573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.251 [2024-07-15 17:32:58.011591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:26:47.251 [2024-07-15 17:32:58.011605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.015879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.015927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:47.251 [2024-07-15 17:32:58.015947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:26:47.251 [2024-07-15 17:32:58.015980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.019918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.019976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:47.251 [2024-07-15 17:32:58.019996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:26:47.251 [2024-07-15 17:32:58.020017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.022049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.022090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:47.251 [2024-07-15 17:32:58.022108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:26:47.251 [2024-07-15 17:32:58.022122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.023820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.023858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:47.251 [2024-07-15 17:32:58.023875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:26:47.251 [2024-07-15 17:32:58.023889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.025394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.025430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:47.251 [2024-07-15 17:32:58.025447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:26:47.251 [2024-07-15 17:32:58.025468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.026814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.251 [2024-07-15 17:32:58.026852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:47.251 [2024-07-15 17:32:58.026889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:26:47.251 [2024-07-15 17:32:58.026903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.251 [2024-07-15 17:32:58.026949] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:47.251 [2024-07-15 17:32:58.026977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:47.251 [2024-07-15 17:32:58.027012] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:47.251 [2024-07-15 17:32:58.027029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 
17:32:58.027422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:47.251 [2024-07-15 17:32:58.027699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:26:47.252 [2024-07-15 17:32:58.027818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.027989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:47.252 [2024-07-15 17:32:58.028611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:26:47.252 [2024-07-15 17:32:58.028625] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ad4aa91-20f1-4a5d-9054-892030a0a588 00:26:47.252 [2024-07-15 17:32:58.028641] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:47.252 [2024-07-15 17:32:58.028655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:47.252 [2024-07-15 17:32:58.028670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:47.252 [2024-07-15 17:32:58.028684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:47.252 [2024-07-15 17:32:58.028707] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:47.252 [2024-07-15 17:32:58.028722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:47.252 [2024-07-15 17:32:58.028736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:47.252 [2024-07-15 17:32:58.028749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:47.252 [2024-07-15 17:32:58.028762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:47.252 [2024-07-15 17:32:58.028777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.252 [2024-07-15 17:32:58.028799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:47.252 [2024-07-15 17:32:58.028815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:26:47.252 [2024-07-15 17:32:58.028831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.032009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.252 [2024-07-15 17:32:58.032052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:47.252 [2024-07-15 17:32:58.032077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:26:47.252 [2024-07-15 17:32:58.032092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.032279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.252 [2024-07-15 17:32:58.032298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:47.252 [2024-07-15 17:32:58.032327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:26:47.252 [2024-07-15 17:32:58.032342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.042762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.252 [2024-07-15 17:32:58.042844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.252 [2024-07-15 17:32:58.042884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.252 [2024-07-15 17:32:58.042913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.043027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.252 [2024-07-15 17:32:58.043060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.252 [2024-07-15 17:32:58.043076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.252 [2024-07-15 17:32:58.043102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.043233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:47.252 [2024-07-15 17:32:58.043276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.252 [2024-07-15 17:32:58.043308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.252 [2024-07-15 17:32:58.043324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.043354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.252 [2024-07-15 17:32:58.043405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.252 [2024-07-15 17:32:58.043421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.252 [2024-07-15 17:32:58.043435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.252 [2024-07-15 17:32:58.067896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.252 [2024-07-15 17:32:58.067992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.252 [2024-07-15 17:32:58.068015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.068031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.083198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.083305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.253 [2024-07-15 17:32:58.083331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.083348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.083508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.083532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.253 [2024-07-15 17:32:58.083548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.083580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.083660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.083681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.253 [2024-07-15 17:32:58.083697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.083712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.083834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.083856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.253 [2024-07-15 17:32:58.083873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.083888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.083957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.083988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:47.253 [2024-07-15 17:32:58.084006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.084021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 
17:32:58.084084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.084104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.253 [2024-07-15 17:32:58.084119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.084133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.084218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.253 [2024-07-15 17:32:58.084248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.253 [2024-07-15 17:32:58.084265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.253 [2024-07-15 17:32:58.084280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.253 [2024-07-15 17:32:58.084533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.662 ms, result 0 00:26:47.818 00:26:47.818 00:26:47.818 17:32:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:50.348 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:50.348 17:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:50.348 17:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:50.348 17:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:50.348 17:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:50.348 17:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 95433 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 95433 ']' 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 95433 00:26:50.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (95433) - No such process 00:26:50.348 Process with pid 95433 is not found 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 95433 is not found' 00:26:50.348 17:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:50.606 Remove shared memory files 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:50.606 ************************************ 00:26:50.606 END TEST 
ftl_dirty_shutdown 00:26:50.606 ************************************ 00:26:50.606 00:26:50.606 real 3m44.971s 00:26:50.606 user 4m15.192s 00:26:50.606 sys 0m38.636s 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.606 17:33:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:50.606 17:33:01 ftl -- common/autotest_common.sh@1142 -- # return 0 00:26:50.606 17:33:01 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:50.606 17:33:01 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:50.606 17:33:01 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.606 17:33:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:50.864 ************************************ 00:26:50.864 START TEST ftl_upgrade_shutdown 00:26:50.864 ************************************ 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:50.864 * Looking for test storage... 00:26:50.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
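The md5sum -c near the end of the trace above is the point of the dirty-shutdown scenario: data written to the FTL bdev before the target was stopped uncleanly is read back after the device comes up again and compared with the checksum recorded earlier, and the run closes with 'testfile2: OK'. A minimal sketch of that pattern, with illustrative device names, sizes, and nbd export rather than the test's exact commands:

  # illustrative sketch only, not the test's exact commands
  dd if=/dev/urandom of=/dev/nbd0 bs=1M count=1024 oflag=direct     # fill the FTL bdev exported over nbd
  dd if=/dev/nbd0 of=testfile2 bs=1M count=1024 iflag=direct
  md5sum testfile2 > testfile2.md5                                  # checksum taken before the crash
  kill -9 "$spdk_tgt_pid"                                           # unclean stop leaves the FTL device dirty
  # restart spdk_tgt, recreate the ftl bdev from the saved ftl.json, re-export /dev/nbd0, then:
  dd if=/dev/nbd0 of=testfile2 bs=1M count=1024 iflag=direct
  md5sum -c testfile2.md5                                           # expected: testfile2: OK

In the trace, the 1024 MB 'Copying: ... (average 25 MBps)' run is the transfer over the recovered device and 'testfile2: OK' is the compare step.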
00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:50.864 
17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=97762 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 97762 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 97762 ']' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.864 17:33:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:50.864 [2024-07-15 17:33:01.694719] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:26:50.864 [2024-07-15 17:33:01.694919] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97762 ] 00:26:51.121 [2024-07-15 17:33:01.846885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
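With spdk_tgt listening on /var/tmp/spdk.sock, the trace that follows builds the FTL device one RPC at a time. Condensed into plain rpc.py calls, the sequence amounts to roughly the following (a sketch assembled from the commands visible in the trace; the lvstore and lvol identifiers are generated at run time, and clear_lvols first removes any lvstore left over on basen1):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # base namespace shows up as basen1
  lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)                      # lvstore on top of basen1
  lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")            # 20480 MiB thin-provisioned volume
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0    # cache namespace shows up as cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                             # one 5120 MiB split: cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2

The resulting ftl bdev keeps its data on the thin-provisioned volume over 0000:00:11.0 and uses cachen1p0 from 0000:00:10.0 as the write buffer, which is what the 'Using cachen1p0 as write buffer cache' notice further down reports.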
00:26:51.121 [2024-07-15 17:33:01.861928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.379 [2024-07-15 17:33:01.992409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:51.951 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:52.208 17:33:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:52.465 { 00:26:52.465 "name": 
"basen1", 00:26:52.465 "aliases": [ 00:26:52.465 "35f4c490-1661-45c4-8a60-18ef946f5a95" 00:26:52.465 ], 00:26:52.465 "product_name": "NVMe disk", 00:26:52.465 "block_size": 4096, 00:26:52.465 "num_blocks": 1310720, 00:26:52.465 "uuid": "35f4c490-1661-45c4-8a60-18ef946f5a95", 00:26:52.465 "assigned_rate_limits": { 00:26:52.465 "rw_ios_per_sec": 0, 00:26:52.465 "rw_mbytes_per_sec": 0, 00:26:52.465 "r_mbytes_per_sec": 0, 00:26:52.465 "w_mbytes_per_sec": 0 00:26:52.465 }, 00:26:52.465 "claimed": true, 00:26:52.465 "claim_type": "read_many_write_one", 00:26:52.465 "zoned": false, 00:26:52.465 "supported_io_types": { 00:26:52.465 "read": true, 00:26:52.465 "write": true, 00:26:52.465 "unmap": true, 00:26:52.465 "flush": true, 00:26:52.465 "reset": true, 00:26:52.465 "nvme_admin": true, 00:26:52.465 "nvme_io": true, 00:26:52.465 "nvme_io_md": false, 00:26:52.465 "write_zeroes": true, 00:26:52.465 "zcopy": false, 00:26:52.465 "get_zone_info": false, 00:26:52.465 "zone_management": false, 00:26:52.465 "zone_append": false, 00:26:52.465 "compare": true, 00:26:52.465 "compare_and_write": false, 00:26:52.465 "abort": true, 00:26:52.465 "seek_hole": false, 00:26:52.465 "seek_data": false, 00:26:52.465 "copy": true, 00:26:52.465 "nvme_iov_md": false 00:26:52.465 }, 00:26:52.465 "driver_specific": { 00:26:52.465 "nvme": [ 00:26:52.465 { 00:26:52.465 "pci_address": "0000:00:11.0", 00:26:52.465 "trid": { 00:26:52.465 "trtype": "PCIe", 00:26:52.465 "traddr": "0000:00:11.0" 00:26:52.465 }, 00:26:52.465 "ctrlr_data": { 00:26:52.465 "cntlid": 0, 00:26:52.465 "vendor_id": "0x1b36", 00:26:52.465 "model_number": "QEMU NVMe Ctrl", 00:26:52.465 "serial_number": "12341", 00:26:52.465 "firmware_revision": "8.0.0", 00:26:52.465 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:52.465 "oacs": { 00:26:52.465 "security": 0, 00:26:52.465 "format": 1, 00:26:52.465 "firmware": 0, 00:26:52.465 "ns_manage": 1 00:26:52.465 }, 00:26:52.465 "multi_ctrlr": false, 00:26:52.465 "ana_reporting": false 00:26:52.465 }, 00:26:52.465 "vs": { 00:26:52.465 "nvme_version": "1.4" 00:26:52.465 }, 00:26:52.465 "ns_data": { 00:26:52.465 "id": 1, 00:26:52.465 "can_share": false 00:26:52.465 } 00:26:52.465 } 00:26:52.465 ], 00:26:52.465 "mp_policy": "active_passive" 00:26:52.465 } 00:26:52.465 } 00:26:52.465 ]' 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:52.465 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:52.723 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d3944113-c1d4-4fb1-92e6-173107dad589 00:26:52.723 17:33:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:26:52.723 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3944113-c1d4-4fb1-92e6-173107dad589 00:26:52.981 17:33:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=cb6e3d0a-2770-48b0-9a7c-2f300d00a904 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u cb6e3d0a-2770-48b0-9a7c-2f300d00a904 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=eedd8948-bd6f-463b-bd25-5cfaec4e9172 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z eedd8948-bd6f-463b-bd25-5cfaec4e9172 ]] 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 eedd8948-bd6f-463b-bd25-5cfaec4e9172 5120 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=eedd8948-bd6f-463b-bd25-5cfaec4e9172 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size eedd8948-bd6f-463b-bd25-5cfaec4e9172 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=eedd8948-bd6f-463b-bd25-5cfaec4e9172 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:53.548 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eedd8948-bd6f-463b-bd25-5cfaec4e9172 00:26:53.806 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:53.806 { 00:26:53.806 "name": "eedd8948-bd6f-463b-bd25-5cfaec4e9172", 00:26:53.806 "aliases": [ 00:26:53.806 "lvs/basen1p0" 00:26:53.806 ], 00:26:53.806 "product_name": "Logical Volume", 00:26:53.806 "block_size": 4096, 00:26:53.806 "num_blocks": 5242880, 00:26:53.806 "uuid": "eedd8948-bd6f-463b-bd25-5cfaec4e9172", 00:26:53.806 "assigned_rate_limits": { 00:26:53.806 "rw_ios_per_sec": 0, 00:26:53.806 "rw_mbytes_per_sec": 0, 00:26:53.806 "r_mbytes_per_sec": 0, 00:26:53.806 "w_mbytes_per_sec": 0 00:26:53.806 }, 00:26:53.806 "claimed": false, 00:26:53.806 "zoned": false, 00:26:53.806 "supported_io_types": { 00:26:53.806 "read": true, 00:26:53.806 "write": true, 00:26:53.806 "unmap": true, 00:26:53.806 "flush": false, 00:26:53.806 "reset": true, 00:26:53.806 "nvme_admin": false, 00:26:53.806 "nvme_io": false, 00:26:53.806 "nvme_io_md": false, 00:26:53.806 "write_zeroes": true, 00:26:53.806 "zcopy": false, 00:26:53.806 "get_zone_info": false, 00:26:53.806 "zone_management": false, 00:26:53.806 "zone_append": false, 00:26:53.806 "compare": false, 00:26:53.806 "compare_and_write": false, 00:26:53.806 "abort": false, 00:26:53.806 "seek_hole": true, 00:26:53.806 "seek_data": true, 00:26:53.806 "copy": false, 
00:26:53.806 "nvme_iov_md": false 00:26:53.806 }, 00:26:53.806 "driver_specific": { 00:26:53.806 "lvol": { 00:26:53.806 "lvol_store_uuid": "cb6e3d0a-2770-48b0-9a7c-2f300d00a904", 00:26:53.806 "base_bdev": "basen1", 00:26:53.806 "thin_provision": true, 00:26:53.806 "num_allocated_clusters": 0, 00:26:53.806 "snapshot": false, 00:26:53.806 "clone": false, 00:26:53.806 "esnap_clone": false 00:26:53.806 } 00:26:53.807 } 00:26:53.807 } 00:26:53.807 ]' 00:26:53.807 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:53.807 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:53.807 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:54.065 17:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:54.323 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:54.323 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:54.323 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:54.581 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:54.581 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:54.581 17:33:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d eedd8948-bd6f-463b-bd25-5cfaec4e9172 -c cachen1p0 --l2p_dram_limit 2 00:26:54.840 [2024-07-15 17:33:05.552732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.552830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:54.840 [2024-07-15 17:33:05.552856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:54.840 [2024-07-15 17:33:05.552872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.552967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.552991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:54.840 [2024-07-15 17:33:05.553005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:26:54.840 [2024-07-15 17:33:05.553025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.553056] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:54.840 [2024-07-15 17:33:05.553490] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:54.840 [2024-07-15 17:33:05.553526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.553544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 
00:26:54.840 [2024-07-15 17:33:05.553559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.478 ms 00:26:54.840 [2024-07-15 17:33:05.553574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.553811] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 652d2bfd-334e-4b2e-972b-8a875aca5eec 00:26:54.840 [2024-07-15 17:33:05.556432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.556475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:54.840 [2024-07-15 17:33:05.556497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:54.840 [2024-07-15 17:33:05.556511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.570784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.570876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:54.840 [2024-07-15 17:33:05.570902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.168 ms 00:26:54.840 [2024-07-15 17:33:05.570915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.571034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.571054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:54.840 [2024-07-15 17:33:05.571072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:54.840 [2024-07-15 17:33:05.571084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.571233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.571259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:54.840 [2024-07-15 17:33:05.571278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:54.840 [2024-07-15 17:33:05.571290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.571343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:54.840 [2024-07-15 17:33:05.574314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.574369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:54.840 [2024-07-15 17:33:05.574386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.997 ms 00:26:54.840 [2024-07-15 17:33:05.574402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.574448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.574467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:54.840 [2024-07-15 17:33:05.574480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:54.840 [2024-07-15 17:33:05.574498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.574525] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:54.840 [2024-07-15 17:33:05.574709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:54.840 [2024-07-15 17:33:05.574729] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:54.840 [2024-07-15 17:33:05.574750] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:54.840 [2024-07-15 17:33:05.574766] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:54.840 [2024-07-15 17:33:05.574783] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:54.840 [2024-07-15 17:33:05.574813] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:54.840 [2024-07-15 17:33:05.574837] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:54.840 [2024-07-15 17:33:05.574849] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:54.840 [2024-07-15 17:33:05.574863] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:54.840 [2024-07-15 17:33:05.574876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.574891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:54.840 [2024-07-15 17:33:05.574904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:26:54.840 [2024-07-15 17:33:05.574918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.575013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.840 [2024-07-15 17:33:05.575034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:54.840 [2024-07-15 17:33:05.575046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:26:54.840 [2024-07-15 17:33:05.575073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.840 [2024-07-15 17:33:05.575219] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:54.840 [2024-07-15 17:33:05.575270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:54.840 [2024-07-15 17:33:05.575286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:54.840 [2024-07-15 17:33:05.575300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:54.840 [2024-07-15 17:33:05.575330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:54.840 [2024-07-15 17:33:05.575355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:54.840 [2024-07-15 17:33:05.575380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:54.840 [2024-07-15 17:33:05.575394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:54.840 [2024-07-15 17:33:05.575419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:54.840 [2024-07-15 17:33:05.575429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:54.840 [2024-07-15 17:33:05.575456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] 
offset: 47.38 MiB 00:26:54.840 [2024-07-15 17:33:05.575469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:54.840 [2024-07-15 17:33:05.575493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:54.840 [2024-07-15 17:33:05.575504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.840 [2024-07-15 17:33:05.575518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:54.840 [2024-07-15 17:33:05.575529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:54.840 [2024-07-15 17:33:05.575546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:54.840 [2024-07-15 17:33:05.575557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:54.840 [2024-07-15 17:33:05.575571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:54.840 [2024-07-15 17:33:05.575582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:54.840 [2024-07-15 17:33:05.575596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:54.840 [2024-07-15 17:33:05.575606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:54.840 [2024-07-15 17:33:05.575621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:54.840 [2024-07-15 17:33:05.575632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:54.840 [2024-07-15 17:33:05.575649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:54.840 [2024-07-15 17:33:05.575659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:54.841 [2024-07-15 17:33:05.575672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:54.841 [2024-07-15 17:33:05.575683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:54.841 [2024-07-15 17:33:05.575696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:54.841 [2024-07-15 17:33:05.575721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:54.841 [2024-07-15 17:33:05.575732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:54.841 [2024-07-15 17:33:05.575756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:54.841 [2024-07-15 17:33:05.575794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:54.841 [2024-07-15 17:33:05.575804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575817] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:54.841 [2024-07-15 17:33:05.575829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:54.841 [2024-07-15 17:33:05.575846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:54.841 [2024-07-15 17:33:05.575857] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl] blocks: 0.12 MiB 00:26:54.841 [2024-07-15 17:33:05.575875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:54.841 [2024-07-15 17:33:05.575887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:54.841 [2024-07-15 17:33:05.575901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:54.841 [2024-07-15 17:33:05.575912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:54.841 [2024-07-15 17:33:05.575926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:54.841 [2024-07-15 17:33:05.575937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:54.841 [2024-07-15 17:33:05.575958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:54.841 [2024-07-15 17:33:05.575974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.575990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:54.841 [2024-07-15 17:33:05.576002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:54.841 [2024-07-15 17:33:05.576053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:54.841 [2024-07-15 17:33:05.576065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:54.841 [2024-07-15 17:33:05.576082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:54.841 [2024-07-15 17:33:05.576094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:54.841 [2024-07-15 17:33:05.576188] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl] SB metadata layout - base dev: 00:26:54.841 [2024-07-15 17:33:05.576201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:54.841 [2024-07-15 17:33:05.576229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:54.841 [2024-07-15 17:33:05.576244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:54.841 [2024-07-15 17:33:05.576256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:54.841 [2024-07-15 17:33:05.576272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.841 [2024-07-15 17:33:05.576284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:54.841 [2024-07-15 17:33:05.576304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:26:54.841 [2024-07-15 17:33:05.576316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.841 [2024-07-15 17:33:05.576398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:54.841 [2024-07-15 17:33:05.576416] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:57.367 [2024-07-15 17:33:08.121982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.122087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:57.367 [2024-07-15 17:33:08.122114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2545.577 ms 00:26:57.367 [2024-07-15 17:33:08.122128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.141555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.141644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:57.367 [2024-07-15 17:33:08.141670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.274 ms 00:26:57.367 [2024-07-15 17:33:08.141689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.141787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.141804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:57.367 [2024-07-15 17:33:08.141826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:57.367 [2024-07-15 17:33:08.141838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.160435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.160512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:57.367 [2024-07-15 17:33:08.160537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.429 ms 00:26:57.367 [2024-07-15 17:33:08.160554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.160671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:57.367 [2024-07-15 17:33:08.160687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:57.367 [2024-07-15 17:33:08.160703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:57.367 [2024-07-15 17:33:08.160715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.161616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.161651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:57.367 [2024-07-15 17:33:08.161684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms 00:26:57.367 [2024-07-15 17:33:08.161697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.161769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.161784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:57.367 [2024-07-15 17:33:08.161799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:57.367 [2024-07-15 17:33:08.161811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.175082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.175138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:57.367 [2024-07-15 17:33:08.175160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.237 ms 00:26:57.367 [2024-07-15 17:33:08.175173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.186607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:57.367 [2024-07-15 17:33:08.188453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.188522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:57.367 [2024-07-15 17:33:08.188541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.135 ms 00:26:57.367 [2024-07-15 17:33:08.188556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.215374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.215461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:57.367 [2024-07-15 17:33:08.215486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.748 ms 00:26:57.367 [2024-07-15 17:33:08.215510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.215676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.215706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:57.367 [2024-07-15 17:33:08.215724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:26:57.367 [2024-07-15 17:33:08.215742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.219090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.219140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:57.367 [2024-07-15 17:33:08.219162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.295 ms 00:26:57.367 [2024-07-15 
17:33:08.219178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.222153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.222202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:57.367 [2024-07-15 17:33:08.222219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.928 ms 00:26:57.367 [2024-07-15 17:33:08.222234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.367 [2024-07-15 17:33:08.222704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.367 [2024-07-15 17:33:08.222741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:57.367 [2024-07-15 17:33:08.222757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:26:57.367 [2024-07-15 17:33:08.222777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.264498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.264593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:57.626 [2024-07-15 17:33:08.264624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.669 ms 00:26:57.626 [2024-07-15 17:33:08.264641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.270183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.270241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:57.626 [2024-07-15 17:33:08.270260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.472 ms 00:26:57.626 [2024-07-15 17:33:08.270277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.274100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.274147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:57.626 [2024-07-15 17:33:08.274163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.765 ms 00:26:57.626 [2024-07-15 17:33:08.274177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.278293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.278344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:57.626 [2024-07-15 17:33:08.278373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.071 ms 00:26:57.626 [2024-07-15 17:33:08.278394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.278448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.278481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:57.626 [2024-07-15 17:33:08.278495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:57.626 [2024-07-15 17:33:08.278510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.278633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.626 [2024-07-15 17:33:08.278666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:57.626 [2024-07-15 17:33:08.278681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 
00:26:57.626 [2024-07-15 17:33:08.278700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.626 [2024-07-15 17:33:08.280172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2726.931 ms, result 0 00:26:57.626 { 00:26:57.626 "name": "ftl", 00:26:57.626 "uuid": "652d2bfd-334e-4b2e-972b-8a875aca5eec" 00:26:57.626 } 00:26:57.626 17:33:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:57.884 [2024-07-15 17:33:08.537860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.884 17:33:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:58.142 17:33:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:58.400 [2024-07-15 17:33:09.026493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:58.400 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:58.658 [2024-07-15 17:33:09.315101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:58.658 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:58.916 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:58.916 Fill FTL, iteration 1 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=97880 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:58.917 17:33:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 97880 /var/tmp/spdk.tgt.sock 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 97880 ']' 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:58.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:58.917 17:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:59.174 [2024-07-15 17:33:09.883289] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:26:59.174 [2024-07-15 17:33:09.884470] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97880 ] 00:26:59.432 [2024-07-15 17:33:10.040061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:59.432 [2024-07-15 17:33:10.062206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.432 [2024-07-15 17:33:10.164302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.997 17:33:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.997 17:33:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:59.997 17:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:00.564 ftln1 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 97880 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 97880 ']' 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 97880 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97880 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:00.564 
killing process with pid 97880 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97880' 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 97880 00:27:00.564 17:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 97880 00:27:01.496 17:33:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:01.496 17:33:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:01.496 [2024-07-15 17:33:12.142271] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:01.496 [2024-07-15 17:33:12.142493] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97922 ] 00:27:01.496 [2024-07-15 17:33:12.288299] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.496 [2024-07-15 17:33:12.312053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.752 [2024-07-15 17:33:12.452084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.418  Copying: 208/1024 [MB] (208 MBps) Copying: 416/1024 [MB] (208 MBps) Copying: 628/1024 [MB] (212 MBps) Copying: 837/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:27:07.418 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:07.418 Calculate MD5 checksum, iteration 1 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:07.418 17:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:07.418 [2024-07-15 17:33:18.099236] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:07.418 [2024-07-15 17:33:18.099471] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97986 ] 00:27:07.418 [2024-07-15 17:33:18.266913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:07.676 [2024-07-15 17:33:18.288070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.676 [2024-07-15 17:33:18.413940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.547  Copying: 496/1024 [MB] (496 MBps) Copying: 988/1024 [MB] (492 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:27:10.547 00:27:10.547 17:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:10.547 17:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:13.107 Fill FTL, iteration 2 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6e985afb9e9b63d47e45a0bfb130bae2 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:13.107 17:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.107 [2024-07-15 17:33:23.485820] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:13.107 [2024-07-15 17:33:23.486052] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98042 ] 00:27:13.107 [2024-07-15 17:33:23.642568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
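Condensed for readability, the fill-and-verify round traced above reduces to the three commands below; this is a minimal sketch reusing only the flags, paths and RPC socket that appear in this run (the second round repeats the same pattern with --seek=1024 / --skip=1024):

  # write 1024 x 1 MiB of random data into the FTL bdev (attached over NVMe/TCP as ftln1) at offset 0, queue depth 2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # read the same 1 GiB back through ftln1 into a scratch file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  # record the MD5 of that region (pass 1 yields 6e985afb9e9b63d47e45a0bfb130bae2 in this run)
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '
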
00:27:13.107 [2024-07-15 17:33:23.663909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.107 [2024-07-15 17:33:23.794489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.464  Copying: 207/1024 [MB] (207 MBps) Copying: 420/1024 [MB] (213 MBps) Copying: 635/1024 [MB] (215 MBps) Copying: 849/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:27:18.464 00:27:18.464 Calculate MD5 checksum, iteration 2 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.464 17:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.722 [2024-07-15 17:33:29.384352] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:18.722 [2024-07-15 17:33:29.384562] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98102 ] 00:27:18.722 [2024-07-15 17:33:29.530147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:18.722 [2024-07-15 17:33:29.554660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.980 [2024-07-15 17:33:29.654721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.516  Copying: 424/1024 [MB] (424 MBps) Copying: 911/1024 [MB] (487 MBps) Copying: 1024/1024 [MB] (average 452 MBps) 00:27:22.516 00:27:22.516 17:33:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:22.516 17:33:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a417e35825c5c96f10cff438a4dad7e2 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:25.042 [2024-07-15 17:33:35.671001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.042 [2024-07-15 17:33:35.671084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:25.042 [2024-07-15 17:33:35.671108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:25.042 [2024-07-15 17:33:35.671120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.042 [2024-07-15 17:33:35.671168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.042 [2024-07-15 17:33:35.671184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:25.042 [2024-07-15 17:33:35.671197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:25.042 [2024-07-15 17:33:35.671222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.042 [2024-07-15 17:33:35.671264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.042 [2024-07-15 17:33:35.671277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:25.042 [2024-07-15 17:33:35.671290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:25.042 [2024-07-15 17:33:35.671302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.042 [2024-07-15 17:33:35.671407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.380 ms, result 0 00:27:25.042 true 00:27:25.042 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:25.299 { 00:27:25.299 "name": "ftl", 00:27:25.299 "properties": [ 00:27:25.299 { 00:27:25.299 "name": "superblock_version", 00:27:25.299 "value": 5, 00:27:25.299 "read-only": true 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "name": "base_device", 00:27:25.299 "bands": [ 00:27:25.299 { 00:27:25.299 "id": 0, 00:27:25.299 "state": "FREE", 00:27:25.299 "validity": 0.0 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "id": 1, 00:27:25.299 "state": "FREE", 00:27:25.299 "validity": 0.0 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "id": 2, 00:27:25.299 "state": "FREE", 00:27:25.299 "validity": 0.0 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "id": 3, 00:27:25.299 "state": "FREE", 00:27:25.299 
"validity": 0.0 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "id": 4, 00:27:25.299 "state": "FREE", 00:27:25.299 "validity": 0.0 00:27:25.299 }, 00:27:25.299 { 00:27:25.299 "id": 5, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 6, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 7, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 8, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 9, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 10, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 11, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 12, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 13, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 14, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 15, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 16, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 17, 00:27:25.300 "state": "FREE", 00:27:25.300 "validity": 0.0 00:27:25.300 } 00:27:25.300 ], 00:27:25.300 "read-only": true 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "name": "cache_device", 00:27:25.300 "type": "bdev", 00:27:25.300 "chunks": [ 00:27:25.300 { 00:27:25.300 "id": 0, 00:27:25.300 "state": "INACTIVE", 00:27:25.300 "utilization": 0.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 1, 00:27:25.300 "state": "CLOSED", 00:27:25.300 "utilization": 1.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 2, 00:27:25.300 "state": "CLOSED", 00:27:25.300 "utilization": 1.0 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 3, 00:27:25.300 "state": "OPEN", 00:27:25.300 "utilization": 0.001953125 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "id": 4, 00:27:25.300 "state": "OPEN", 00:27:25.300 "utilization": 0.0 00:27:25.300 } 00:27:25.300 ], 00:27:25.300 "read-only": true 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "name": "verbose_mode", 00:27:25.300 "value": true, 00:27:25.300 "unit": "", 00:27:25.300 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:25.300 }, 00:27:25.300 { 00:27:25.300 "name": "prep_upgrade_on_shutdown", 00:27:25.300 "value": false, 00:27:25.300 "unit": "", 00:27:25.300 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:25.300 } 00:27:25.300 ] 00:27:25.300 } 00:27:25.300 17:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:25.557 [2024-07-15 17:33:36.259656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.557 [2024-07-15 17:33:36.259744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:25.558 [2024-07-15 17:33:36.259766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:25.558 [2024-07-15 17:33:36.259779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:25.558 [2024-07-15 17:33:36.259818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.558 [2024-07-15 17:33:36.259834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:25.558 [2024-07-15 17:33:36.259847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:25.558 [2024-07-15 17:33:36.259859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.558 [2024-07-15 17:33:36.259886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.558 [2024-07-15 17:33:36.259900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:25.558 [2024-07-15 17:33:36.259912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:25.558 [2024-07-15 17:33:36.259925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.558 [2024-07-15 17:33:36.260009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.346 ms, result 0 00:27:25.558 true 00:27:25.558 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:25.558 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:25.558 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:25.815 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:25.815 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:25.815 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:26.072 [2024-07-15 17:33:36.760250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.072 [2024-07-15 17:33:36.760338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:26.072 [2024-07-15 17:33:36.760374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:26.072 [2024-07-15 17:33:36.760390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.072 [2024-07-15 17:33:36.760432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.072 [2024-07-15 17:33:36.760448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:26.072 [2024-07-15 17:33:36.760462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:26.072 [2024-07-15 17:33:36.760473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.072 [2024-07-15 17:33:36.760501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.072 [2024-07-15 17:33:36.760515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:26.072 [2024-07-15 17:33:36.760527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:26.072 [2024-07-15 17:33:36.760538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.072 [2024-07-15 17:33:36.760623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.366 ms, result 0 00:27:26.072 true 00:27:26.072 17:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:26.330 { 00:27:26.330 "name": "ftl", 00:27:26.330 "properties": [ 00:27:26.330 { 00:27:26.330 "name": "superblock_version", 00:27:26.330 "value": 5, 00:27:26.330 "read-only": true 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "name": "base_device", 00:27:26.330 "bands": [ 00:27:26.330 { 00:27:26.330 "id": 0, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 1, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 2, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 3, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 4, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 5, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 6, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 7, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 8, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 9, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 10, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 11, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 12, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.330 }, 00:27:26.330 { 00:27:26.330 "id": 13, 00:27:26.330 "state": "FREE", 00:27:26.330 "validity": 0.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 14, 00:27:26.331 "state": "FREE", 00:27:26.331 "validity": 0.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 15, 00:27:26.331 "state": "FREE", 00:27:26.331 "validity": 0.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 16, 00:27:26.331 "state": "FREE", 00:27:26.331 "validity": 0.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 17, 00:27:26.331 "state": "FREE", 00:27:26.331 "validity": 0.0 00:27:26.331 } 00:27:26.331 ], 00:27:26.331 "read-only": true 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "name": "cache_device", 00:27:26.331 "type": "bdev", 00:27:26.331 "chunks": [ 00:27:26.331 { 00:27:26.331 "id": 0, 00:27:26.331 "state": "INACTIVE", 00:27:26.331 "utilization": 0.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 1, 00:27:26.331 "state": "CLOSED", 00:27:26.331 "utilization": 1.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 2, 00:27:26.331 "state": "CLOSED", 00:27:26.331 "utilization": 1.0 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 3, 00:27:26.331 "state": "OPEN", 00:27:26.331 "utilization": 0.001953125 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "id": 4, 00:27:26.331 "state": "OPEN", 00:27:26.331 "utilization": 0.0 00:27:26.331 } 00:27:26.331 ], 00:27:26.331 "read-only": true 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "name": "verbose_mode", 00:27:26.331 "value": true, 00:27:26.331 "unit": "", 00:27:26.331 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:26.331 }, 00:27:26.331 { 00:27:26.331 "name": "prep_upgrade_on_shutdown", 00:27:26.331 "value": true, 00:27:26.331 "unit": "", 00:27:26.331 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:26.331 } 00:27:26.331 ] 00:27:26.331 } 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 97762 ]] 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 97762 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 97762 ']' 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 97762 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97762 00:27:26.331 killing process with pid 97762 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97762' 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 97762 00:27:26.331 17:33:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 97762 00:27:26.589 [2024-07-15 17:33:37.325570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:26.589 [2024-07-15 17:33:37.333971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.589 [2024-07-15 17:33:37.334030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:26.589 [2024-07-15 17:33:37.334052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:26.589 [2024-07-15 17:33:37.334066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.589 [2024-07-15 17:33:37.334098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:26.589 [2024-07-15 17:33:37.335382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.589 [2024-07-15 17:33:37.335411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:26.589 [2024-07-15 17:33:37.335427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:27:26.589 [2024-07-15 17:33:37.335439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.031123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.031225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:36.616 [2024-07-15 17:33:46.031250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8695.667 ms 00:27:36.616 [2024-07-15 17:33:46.031271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.032580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.032620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:36.616 [2024-07-15 17:33:46.032649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.282 ms 00:27:36.616 [2024-07-15 17:33:46.032663] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.033882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.033919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:36.616 [2024-07-15 17:33:46.033935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.177 ms 00:27:36.616 [2024-07-15 17:33:46.033946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.036246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.036285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:36.616 [2024-07-15 17:33:46.036300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.244 ms 00:27:36.616 [2024-07-15 17:33:46.036312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.038869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.038913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:36.616 [2024-07-15 17:33:46.038930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.517 ms 00:27:36.616 [2024-07-15 17:33:46.038942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.039032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.039051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:36.616 [2024-07-15 17:33:46.039072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:36.616 [2024-07-15 17:33:46.039085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.040262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.040298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:36.616 [2024-07-15 17:33:46.040313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:27:36.616 [2024-07-15 17:33:46.040325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.041635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.041673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:36.616 [2024-07-15 17:33:46.041688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.261 ms 00:27:36.616 [2024-07-15 17:33:46.041698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.042933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.042970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:36.616 [2024-07-15 17:33:46.042985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:27:36.616 [2024-07-15 17:33:46.042996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.044441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.044519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:36.616 [2024-07-15 17:33:46.044536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.371 ms 
00:27:36.616 [2024-07-15 17:33:46.044548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.616 [2024-07-15 17:33:46.044588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:36.616 [2024-07-15 17:33:46.044612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.616 [2024-07-15 17:33:46.044645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.616 [2024-07-15 17:33:46.044659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:36.616 [2024-07-15 17:33:46.044672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:36.616 [2024-07-15 17:33:46.044867] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:36.616 [2024-07-15 17:33:46.044879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 652d2bfd-334e-4b2e-972b-8a875aca5eec 00:27:36.616 [2024-07-15 17:33:46.044893] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:36.616 [2024-07-15 17:33:46.044904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:36.616 [2024-07-15 17:33:46.044916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:36.616 [2024-07-15 17:33:46.044929] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:36.616 [2024-07-15 
17:33:46.044941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:36.616 [2024-07-15 17:33:46.044959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:36.616 [2024-07-15 17:33:46.044971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:36.616 [2024-07-15 17:33:46.044982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:36.616 [2024-07-15 17:33:46.044992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:36.616 [2024-07-15 17:33:46.045005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.616 [2024-07-15 17:33:46.045017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:36.617 [2024-07-15 17:33:46.045030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:27:36.617 [2024-07-15 17:33:46.045043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.048143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.617 [2024-07-15 17:33:46.048179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:36.617 [2024-07-15 17:33:46.048195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.056 ms 00:27:36.617 [2024-07-15 17:33:46.048215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.048437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.617 [2024-07-15 17:33:46.048455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:36.617 [2024-07-15 17:33:46.048468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:27:36.617 [2024-07-15 17:33:46.048479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.060546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.060626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:36.617 [2024-07-15 17:33:46.060644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.060697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.060751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.060767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:36.617 [2024-07-15 17:33:46.060781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.060793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.060910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.060930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:36.617 [2024-07-15 17:33:46.061003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.061032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.061079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.061109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:36.617 [2024-07-15 17:33:46.061122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:27:36.617 [2024-07-15 17:33:46.061134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.081643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.081972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:36.617 [2024-07-15 17:33:46.082089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.082155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.096764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:36.617 [2024-07-15 17:33:46.097154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.617 [2024-07-15 17:33:46.097345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.617 [2024-07-15 17:33:46.097521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.617 [2024-07-15 17:33:46.097676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:36.617 [2024-07-15 17:33:46.097781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.097893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.617 [2024-07-15 17:33:46.097906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.097917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.097980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.617 [2024-07-15 17:33:46.098003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.617 [2024-07-15 17:33:46.098016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:27:36.617 [2024-07-15 17:33:46.098027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.617 [2024-07-15 17:33:46.098231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8764.252 ms, result 0 00:27:38.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=98290 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 98290 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 98290 ']' 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:38.516 17:33:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:38.516 [2024-07-15 17:33:49.184285] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:38.516 [2024-07-15 17:33:49.185115] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98290 ] 00:27:38.516 [2024-07-15 17:33:49.329317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:38.517 [2024-07-15 17:33:49.352811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.774 [2024-07-15 17:33:49.495595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.340 [2024-07-15 17:33:49.955187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:39.340 [2024-07-15 17:33:49.955713] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:39.340 [2024-07-15 17:33:50.109631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.340 [2024-07-15 17:33:50.110005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:39.340 [2024-07-15 17:33:50.110152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:39.340 [2024-07-15 17:33:50.110224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.340 [2024-07-15 17:33:50.110482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.340 [2024-07-15 17:33:50.110640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:39.340 [2024-07-15 17:33:50.110796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:27:39.340 [2024-07-15 17:33:50.110929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.340 [2024-07-15 17:33:50.111024] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:39.340 [2024-07-15 17:33:50.111616] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:39.341 [2024-07-15 17:33:50.111659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.111677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:39.341 [2024-07-15 17:33:50.111693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:27:39.341 [2024-07-15 17:33:50.111707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.114356] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:39.341 [2024-07-15 17:33:50.118185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.118246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:39.341 [2024-07-15 17:33:50.118266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.838 ms 00:27:39.341 [2024-07-15 17:33:50.118280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.118386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.118412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:39.341 [2024-07-15 17:33:50.118428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:39.341 [2024-07-15 17:33:50.118449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.131541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.131692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:39.341 [2024-07-15 17:33:50.131717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.974 ms 00:27:39.341 [2024-07-15 17:33:50.131733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:39.341 [2024-07-15 17:33:50.131848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.131873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:39.341 [2024-07-15 17:33:50.131906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:39.341 [2024-07-15 17:33:50.131922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.132074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.132097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:39.341 [2024-07-15 17:33:50.132114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:39.341 [2024-07-15 17:33:50.132128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.132180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:39.341 [2024-07-15 17:33:50.135197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.135246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:39.341 [2024-07-15 17:33:50.135266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.033 ms 00:27:39.341 [2024-07-15 17:33:50.135280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.135376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.135401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:39.341 [2024-07-15 17:33:50.135417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:39.341 [2024-07-15 17:33:50.135431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.135474] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:39.341 [2024-07-15 17:33:50.135516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:39.341 [2024-07-15 17:33:50.135581] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:39.341 [2024-07-15 17:33:50.135612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:39.341 [2024-07-15 17:33:50.135726] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:39.341 [2024-07-15 17:33:50.135745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:39.341 [2024-07-15 17:33:50.135764] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:39.341 [2024-07-15 17:33:50.135788] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:39.341 [2024-07-15 17:33:50.135808] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:39.341 [2024-07-15 17:33:50.135822] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:39.341 [2024-07-15 17:33:50.135836] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:39.341 [2024-07-15 17:33:50.135854] ftl_layout.c: 
681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:39.341 [2024-07-15 17:33:50.135867] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:39.341 [2024-07-15 17:33:50.135882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.135896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:39.341 [2024-07-15 17:33:50.135918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:27:39.341 [2024-07-15 17:33:50.135932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.136037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.341 [2024-07-15 17:33:50.136065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:39.341 [2024-07-15 17:33:50.136081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:27:39.341 [2024-07-15 17:33:50.136109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.341 [2024-07-15 17:33:50.136272] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:39.341 [2024-07-15 17:33:50.136297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:39.341 [2024-07-15 17:33:50.136320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:39.341 [2024-07-15 17:33:50.136384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:39.341 [2024-07-15 17:33:50.136414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:39.341 [2024-07-15 17:33:50.136427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:39.341 [2024-07-15 17:33:50.136440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:39.341 [2024-07-15 17:33:50.136466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:39.341 [2024-07-15 17:33:50.136480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:39.341 [2024-07-15 17:33:50.136505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:39.341 [2024-07-15 17:33:50.136518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:39.341 [2024-07-15 17:33:50.136543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:39.341 [2024-07-15 17:33:50.136562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:39.341 [2024-07-15 17:33:50.136593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:39.341 [2024-07-15 17:33:50.136606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:39.341 [2024-07-15 
17:33:50.136619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:39.341 [2024-07-15 17:33:50.136633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:39.341 [2024-07-15 17:33:50.136645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:39.341 [2024-07-15 17:33:50.136671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:39.341 [2024-07-15 17:33:50.136683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:39.341 [2024-07-15 17:33:50.136708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:39.341 [2024-07-15 17:33:50.136722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:39.341 [2024-07-15 17:33:50.136748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:39.341 [2024-07-15 17:33:50.136761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:39.341 [2024-07-15 17:33:50.136799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:39.341 [2024-07-15 17:33:50.136839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:39.341 [2024-07-15 17:33:50.136877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:39.341 [2024-07-15 17:33:50.136890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136902] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:39.341 [2024-07-15 17:33:50.136916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:39.341 [2024-07-15 17:33:50.136929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:39.341 [2024-07-15 17:33:50.136962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.341 [2024-07-15 17:33:50.136977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:39.341 [2024-07-15 17:33:50.136990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:39.341 [2024-07-15 17:33:50.137003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:39.341 [2024-07-15 17:33:50.137025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:39.341 [2024-07-15 17:33:50.137040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:39.341 [2024-07-15 17:33:50.137055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:39.341 [2024-07-15 17:33:50.137071] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:27:39.341 [2024-07-15 17:33:50.137095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.341 [2024-07-15 17:33:50.137111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:39.342 [2024-07-15 17:33:50.137125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:39.342 [2024-07-15 17:33:50.137168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:39.342 [2024-07-15 17:33:50.137181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:39.342 [2024-07-15 17:33:50.137195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:39.342 [2024-07-15 17:33:50.137209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:39.342 [2024-07-15 17:33:50.137315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:39.342 [2024-07-15 17:33:50.137343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.342 [2024-07-15 17:33:50.137404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:39.342 [2024-07-15 17:33:50.137419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:39.342 [2024-07-15 17:33:50.137433] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:39.342 [2024-07-15 17:33:50.137449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.342 [2024-07-15 17:33:50.137464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:39.342 [2024-07-15 17:33:50.137479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.243 ms 00:27:39.342 [2024-07-15 17:33:50.137493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.342 [2024-07-15 17:33:50.137582] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:39.342 [2024-07-15 17:33:50.137606] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:42.643 [2024-07-15 17:33:53.101084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.101551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:42.643 [2024-07-15 17:33:53.101727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2963.510 ms 00:27:42.643 [2024-07-15 17:33:53.101856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.122954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.123465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:42.643 [2024-07-15 17:33:53.123621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.851 ms 00:27:42.643 [2024-07-15 17:33:53.123683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.124020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.124067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:42.643 [2024-07-15 17:33:53.124085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:42.643 [2024-07-15 17:33:53.124100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.142797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.142885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:42.643 [2024-07-15 17:33:53.142910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.620 ms 00:27:42.643 [2024-07-15 17:33:53.142925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.143036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.143057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:42.643 [2024-07-15 17:33:53.143074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:42.643 [2024-07-15 17:33:53.143088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.144013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.144055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:42.643 [2024-07-15 17:33:53.144074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.823 ms 00:27:42.643 [2024-07-15 17:33:53.144089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:27:42.643 [2024-07-15 17:33:53.144178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.144208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:42.643 [2024-07-15 17:33:53.144225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:42.643 [2024-07-15 17:33:53.144239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.157605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.157662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:42.643 [2024-07-15 17:33:53.157684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.326 ms 00:27:42.643 [2024-07-15 17:33:53.157698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.161886] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:42.643 [2024-07-15 17:33:53.161939] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:42.643 [2024-07-15 17:33:53.161978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.643 [2024-07-15 17:33:53.161992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:42.643 [2024-07-15 17:33:53.162007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.055 ms 00:27:42.643 [2024-07-15 17:33:53.162020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.643 [2024-07-15 17:33:53.166698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.166746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:42.644 [2024-07-15 17:33:53.166765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.612 ms 00:27:42.644 [2024-07-15 17:33:53.166779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.168622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.168700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:42.644 [2024-07-15 17:33:53.168719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.785 ms 00:27:42.644 [2024-07-15 17:33:53.168732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.170392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.170448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:42.644 [2024-07-15 17:33:53.170465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.607 ms 00:27:42.644 [2024-07-15 17:33:53.170478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.170908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.170948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:42.644 [2024-07-15 17:33:53.170973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:27:42.644 [2024-07-15 17:33:53.170988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.212949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.213092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:42.644 [2024-07-15 17:33:53.213120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.894 ms 00:27:42.644 [2024-07-15 17:33:53.213134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.221840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:42.644 [2024-07-15 17:33:53.222744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.222785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:42.644 [2024-07-15 17:33:53.222806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.504 ms 00:27:42.644 [2024-07-15 17:33:53.222824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.222958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.223011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:42.644 [2024-07-15 17:33:53.223060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:42.644 [2024-07-15 17:33:53.223080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.223164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.223198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:42.644 [2024-07-15 17:33:53.223214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:42.644 [2024-07-15 17:33:53.223226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.223271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.223290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:42.644 [2024-07-15 17:33:53.223304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:42.644 [2024-07-15 17:33:53.223317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.223420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:42.644 [2024-07-15 17:33:53.223452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.223466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:42.644 [2024-07-15 17:33:53.223480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:42.644 [2024-07-15 17:33:53.223494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.228144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.228192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:42.644 [2024-07-15 17:33:53.228228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.611 ms 00:27:42.644 [2024-07-15 17:33:53.228242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.228335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.644 [2024-07-15 17:33:53.228385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:42.644 
[2024-07-15 17:33:53.228403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:42.644 [2024-07-15 17:33:53.228416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.644 [2024-07-15 17:33:53.230221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3119.989 ms, result 0 00:27:42.644 [2024-07-15 17:33:53.243343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.644 [2024-07-15 17:33:53.259331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:42.644 [2024-07-15 17:33:53.267522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.211 17:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.211 17:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:43.211 17:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:43.211 17:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:43.211 17:33:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:43.211 [2024-07-15 17:33:53.996045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.211 [2024-07-15 17:33:53.996442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:43.211 [2024-07-15 17:33:53.996616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:43.211 [2024-07-15 17:33:53.996679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.211 [2024-07-15 17:33:53.996777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.211 [2024-07-15 17:33:53.996920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:43.211 [2024-07-15 17:33:53.996978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:43.211 [2024-07-15 17:33:53.997036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.211 [2024-07-15 17:33:53.997159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.211 [2024-07-15 17:33:53.997394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:43.211 [2024-07-15 17:33:53.997467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:43.211 [2024-07-15 17:33:53.997516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.211 [2024-07-15 17:33:53.997840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.762 ms, result 0 00:27:43.211 true 00:27:43.211 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.469 { 00:27:43.469 "name": "ftl", 00:27:43.469 "properties": [ 00:27:43.469 { 00:27:43.469 "name": "superblock_version", 00:27:43.469 "value": 5, 00:27:43.469 "read-only": true 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "name": "base_device", 00:27:43.469 "bands": [ 00:27:43.469 { 00:27:43.469 "id": 0, 00:27:43.469 "state": "CLOSED", 00:27:43.469 "validity": 1.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 1, 00:27:43.469 "state": "CLOSED", 00:27:43.469 "validity": 1.0 
00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 2, 00:27:43.469 "state": "CLOSED", 00:27:43.469 "validity": 0.007843137254901933 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 3, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 4, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 5, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 6, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 7, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 8, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 9, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 10, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 11, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 12, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 13, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 14, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 15, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 16, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 17, 00:27:43.469 "state": "FREE", 00:27:43.469 "validity": 0.0 00:27:43.469 } 00:27:43.469 ], 00:27:43.469 "read-only": true 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "name": "cache_device", 00:27:43.469 "type": "bdev", 00:27:43.469 "chunks": [ 00:27:43.469 { 00:27:43.469 "id": 0, 00:27:43.469 "state": "INACTIVE", 00:27:43.469 "utilization": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 1, 00:27:43.469 "state": "OPEN", 00:27:43.469 "utilization": 0.0 00:27:43.469 }, 00:27:43.469 { 00:27:43.469 "id": 2, 00:27:43.469 "state": "OPEN", 00:27:43.469 "utilization": 0.0 00:27:43.470 }, 00:27:43.470 { 00:27:43.470 "id": 3, 00:27:43.470 "state": "FREE", 00:27:43.470 "utilization": 0.0 00:27:43.470 }, 00:27:43.470 { 00:27:43.470 "id": 4, 00:27:43.470 "state": "FREE", 00:27:43.470 "utilization": 0.0 00:27:43.470 } 00:27:43.470 ], 00:27:43.470 "read-only": true 00:27:43.470 }, 00:27:43.470 { 00:27:43.470 "name": "verbose_mode", 00:27:43.470 "value": true, 00:27:43.470 "unit": "", 00:27:43.470 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:43.470 }, 00:27:43.470 { 00:27:43.470 "name": "prep_upgrade_on_shutdown", 00:27:43.470 "value": false, 00:27:43.470 "unit": "", 00:27:43.470 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:43.470 } 00:27:43.470 ] 00:27:43.470 } 00:27:43.470 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:43.470 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:43.470 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.727 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:43.727 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:43.727 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:43.727 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.727 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:43.985 Validate MD5 checksum, iteration 1 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:43.985 17:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:44.243 [2024-07-15 17:33:54.893091] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:44.243 [2024-07-15 17:33:54.893503] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98360 ] 00:27:44.243 [2024-07-15 17:33:55.035777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:44.243 [2024-07-15 17:33:55.056534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.499 [2024-07-15 17:33:55.160893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.316  Copying: 391/1024 [MB] (391 MBps) Copying: 796/1024 [MB] (405 MBps) Copying: 1024/1024 [MB] (average 411 MBps) 00:27:48.316 00:27:48.316 17:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:48.316 17:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:50.215 Validate MD5 checksum, iteration 2 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6e985afb9e9b63d47e45a0bfb130bae2 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6e985afb9e9b63d47e45a0bfb130bae2 != \6\e\9\8\5\a\f\b\9\e\9\b\6\3\d\4\7\e\4\5\a\0\b\f\b\1\3\0\b\a\e\2 ]] 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:50.215 17:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.472 [2024-07-15 17:34:01.150409] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:50.472 [2024-07-15 17:34:01.150638] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98432 ] 00:27:50.472 [2024-07-15 17:34:01.319645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:50.731 [2024-07-15 17:34:01.337722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.731 [2024-07-15 17:34:01.437899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.544  Copying: 412/1024 [MB] (412 MBps) Copying: 828/1024 [MB] (416 MBps) Copying: 1024/1024 [MB] (average 414 MBps) 00:27:55.544 00:27:55.544 17:34:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:55.544 17:34:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a417e35825c5c96f10cff438a4dad7e2 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a417e35825c5c96f10cff438a4dad7e2 != \a\4\1\7\e\3\5\8\2\5\c\5\c\9\6\f\1\0\c\f\f\4\3\8\a\4\d\a\d\7\e\2 ]] 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 98290 ]] 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 98290 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=98510 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 98510 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 98510 ']' 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.070 17:34:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.070 [2024-07-15 17:34:08.449342] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:58.070 [2024-07-15 17:34:08.450553] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98510 ] 00:27:58.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 98290 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:58.070 [2024-07-15 17:34:08.603131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:58.070 [2024-07-15 17:34:08.624207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.070 [2024-07-15 17:34:08.730648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.328 [2024-07-15 17:34:09.096752] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.328 [2024-07-15 17:34:09.096848] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.587 [2024-07-15 17:34:09.250065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.587 [2024-07-15 17:34:09.250136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:58.587 [2024-07-15 17:34:09.250161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:58.587 [2024-07-15 17:34:09.250184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.587 [2024-07-15 17:34:09.250303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.587 [2024-07-15 17:34:09.250329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:58.587 [2024-07-15 17:34:09.250346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:27:58.587 [2024-07-15 17:34:09.250391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.587 [2024-07-15 17:34:09.250435] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:58.587 [2024-07-15 17:34:09.250858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:58.587 [2024-07-15 17:34:09.250905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.587 [2024-07-15 17:34:09.250924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:58.587 [2024-07-15 17:34:09.250940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:27:58.587 [2024-07-15 17:34:09.250954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.587 [2024-07-15 17:34:09.251882] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:58.587 [2024-07-15 17:34:09.257219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.587 [2024-07-15 17:34:09.257447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:58.587 [2024-07-15 17:34:09.257612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.345 ms 00:27:58.588 [2024-07-15 17:34:09.257644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.258848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.258886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:27:58.588 [2024-07-15 17:34:09.258905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:58.588 [2024-07-15 17:34:09.258926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.259665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.259866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:58.588 [2024-07-15 17:34:09.260012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:27:58.588 [2024-07-15 17:34:09.260171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.260295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.260450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:58.588 [2024-07-15 17:34:09.260604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:58.588 [2024-07-15 17:34:09.260768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.260896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.261052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:58.588 [2024-07-15 17:34:09.261192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:58.588 [2024-07-15 17:34:09.261256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.261460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:58.588 [2024-07-15 17:34:09.262793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.262954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:58.588 [2024-07-15 17:34:09.263086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:27:58.588 [2024-07-15 17:34:09.263174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.263277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.263448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:58.588 [2024-07-15 17:34:09.263515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:58.588 [2024-07-15 17:34:09.263560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.263733] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:58.588 [2024-07-15 17:34:09.263800] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:58.588 [2024-07-15 17:34:09.263857] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:58.588 [2024-07-15 17:34:09.263883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:58.588 [2024-07-15 17:34:09.264011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:58.588 [2024-07-15 17:34:09.264035] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:58.588 [2024-07-15 17:34:09.264060] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:58.588 [2024-07-15 17:34:09.264080] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:58.588 [2024-07-15 17:34:09.264097] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:58.588 [2024-07-15 17:34:09.264112] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:58.588 [2024-07-15 17:34:09.264126] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:58.588 [2024-07-15 17:34:09.264144] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:58.588 [2024-07-15 17:34:09.264158] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:58.588 [2024-07-15 17:34:09.264180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.264205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:58.588 [2024-07-15 17:34:09.264221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:27:58.588 [2024-07-15 17:34:09.264246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.264403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.588 [2024-07-15 17:34:09.264425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:58.588 [2024-07-15 17:34:09.264441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.107 ms 00:27:58.588 [2024-07-15 17:34:09.264454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.588 [2024-07-15 17:34:09.264621] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:58.588 [2024-07-15 17:34:09.264642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:58.588 [2024-07-15 17:34:09.264658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.588 [2024-07-15 17:34:09.264672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.264686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:58.588 [2024-07-15 17:34:09.264698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.264715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:58.588 [2024-07-15 17:34:09.264737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:58.588 [2024-07-15 17:34:09.264758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:58.588 [2024-07-15 17:34:09.264771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.264784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:58.588 [2024-07-15 17:34:09.264797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:58.588 [2024-07-15 17:34:09.264809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.264822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:58.588 [2024-07-15 17:34:09.264835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:58.588 [2024-07-15 17:34:09.264847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 
17:34:09.264865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:58.588 [2024-07-15 17:34:09.264879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:58.588 [2024-07-15 17:34:09.264891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.264904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:58.588 [2024-07-15 17:34:09.264917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:58.588 [2024-07-15 17:34:09.264929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.588 [2024-07-15 17:34:09.264942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:58.588 [2024-07-15 17:34:09.264955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:58.588 [2024-07-15 17:34:09.264967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.588 [2024-07-15 17:34:09.264980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:58.588 [2024-07-15 17:34:09.264993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:58.588 [2024-07-15 17:34:09.265005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.588 [2024-07-15 17:34:09.265018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:58.588 [2024-07-15 17:34:09.265031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:58.588 [2024-07-15 17:34:09.265043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.588 [2024-07-15 17:34:09.265055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:58.588 [2024-07-15 17:34:09.265072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:58.588 [2024-07-15 17:34:09.265085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:58.588 [2024-07-15 17:34:09.265111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:58.588 [2024-07-15 17:34:09.265123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:58.588 [2024-07-15 17:34:09.265149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:58.588 [2024-07-15 17:34:09.265187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:58.588 [2024-07-15 17:34:09.265199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265215] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:58.588 [2024-07-15 17:34:09.265244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:58.588 [2024-07-15 17:34:09.265260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.588 [2024-07-15 17:34:09.265275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.588 [2024-07-15 17:34:09.265289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:58.588 [2024-07-15 
17:34:09.265311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:58.588 [2024-07-15 17:34:09.265325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:58.588 [2024-07-15 17:34:09.265339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:58.588 [2024-07-15 17:34:09.265352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:58.588 [2024-07-15 17:34:09.265397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:58.588 [2024-07-15 17:34:09.265416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:58.588 [2024-07-15 17:34:09.265447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.588 [2024-07-15 17:34:09.265468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:58.588 [2024-07-15 17:34:09.265484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:58.588 [2024-07-15 17:34:09.265498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:58.588 [2024-07-15 17:34:09.265512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:58.588 [2024-07-15 17:34:09.265527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:58.588 [2024-07-15 17:34:09.265541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:58.588 [2024-07-15 17:34:09.265555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:58.589 [2024-07-15 17:34:09.265569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:58.589 [2024-07-15 17:34:09.265674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:58.589 [2024-07-15 17:34:09.265690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.589 [2024-07-15 17:34:09.265720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:58.589 [2024-07-15 17:34:09.265739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:58.589 [2024-07-15 17:34:09.265753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:58.589 [2024-07-15 17:34:09.265769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.265783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:58.589 [2024-07-15 17:34:09.265800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.234 ms 00:27:58.589 [2024-07-15 17:34:09.265815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.278984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.279061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.589 [2024-07-15 17:34:09.279086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.074 ms 00:27:58.589 [2024-07-15 17:34:09.279108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.279192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.279212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:58.589 [2024-07-15 17:34:09.279245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:58.589 [2024-07-15 17:34:09.279265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.293782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.293857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.589 [2024-07-15 17:34:09.293882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.357 ms 00:27:58.589 [2024-07-15 17:34:09.293905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.293999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.294019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.589 [2024-07-15 17:34:09.294036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:58.589 [2024-07-15 17:34:09.294050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.294244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.294268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:58.589 [2024-07-15 17:34:09.294284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:27:58.589 [2024-07-15 17:34:09.294299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.294405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 
[2024-07-15 17:34:09.294432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.589 [2024-07-15 17:34:09.294463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:58.589 [2024-07-15 17:34:09.294478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.304522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.304603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.589 [2024-07-15 17:34:09.304628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.001 ms 00:27:58.589 [2024-07-15 17:34:09.304643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.304871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.304905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:58.589 [2024-07-15 17:34:09.304922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:58.589 [2024-07-15 17:34:09.304947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.318268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.318341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:58.589 [2024-07-15 17:34:09.318392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.268 ms 00:27:58.589 [2024-07-15 17:34:09.318411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.320610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.320665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.589 [2024-07-15 17:34:09.320686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:27:58.589 [2024-07-15 17:34:09.320701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.343691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.343771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:58.589 [2024-07-15 17:34:09.343794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.921 ms 00:27:58.589 [2024-07-15 17:34:09.343824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.344056] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:58.589 [2024-07-15 17:34:09.344196] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:58.589 [2024-07-15 17:34:09.344323] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:58.589 [2024-07-15 17:34:09.344489] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:58.589 [2024-07-15 17:34:09.344513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.344527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:58.589 [2024-07-15 17:34:09.344541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:27:58.589 [2024-07-15 17:34:09.344553] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.344663] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:58.589 [2024-07-15 17:34:09.344685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.344697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:58.589 [2024-07-15 17:34:09.344710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:58.589 [2024-07-15 17:34:09.344721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.347663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.347709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:58.589 [2024-07-15 17:34:09.347727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.897 ms 00:27:58.589 [2024-07-15 17:34:09.347739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.348488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.589 [2024-07-15 17:34:09.348519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:58.589 [2024-07-15 17:34:09.348534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:58.589 [2024-07-15 17:34:09.348546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.589 [2024-07-15 17:34:09.348848] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:59.187 [2024-07-15 17:34:09.905947] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:59.187 [2024-07-15 17:34:09.906155] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:59.754 [2024-07-15 17:34:10.470405] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:59.754 [2024-07-15 17:34:10.470543] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:59.754 [2024-07-15 17:34:10.470568] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.754 [2024-07-15 17:34:10.470586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.470601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:59.754 [2024-07-15 17:34:10.470619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1121.972 ms 00:27:59.754 [2024-07-15 17:34:10.470631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.470709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.470740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:59.754 [2024-07-15 17:34:10.470754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.754 [2024-07-15 17:34:10.470765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.479877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 
2) MiB 00:27:59.754 [2024-07-15 17:34:10.480045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.480075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.754 [2024-07-15 17:34:10.480090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.255 ms 00:27:59.754 [2024-07-15 17:34:10.480103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.480931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.480961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:59.754 [2024-07-15 17:34:10.480976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.707 ms 00:27:59.754 [2024-07-15 17:34:10.480989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.483433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.483464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:59.754 [2024-07-15 17:34:10.483479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.417 ms 00:27:59.754 [2024-07-15 17:34:10.483497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.483565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.483582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:59.754 [2024-07-15 17:34:10.483596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.754 [2024-07-15 17:34:10.483607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.483774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.483793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.754 [2024-07-15 17:34:10.483807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:59.754 [2024-07-15 17:34:10.483819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.483860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.483875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.754 [2024-07-15 17:34:10.483888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.754 [2024-07-15 17:34:10.483900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.483944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.754 [2024-07-15 17:34:10.483962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.483975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.754 [2024-07-15 17:34:10.483988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:59.754 [2024-07-15 17:34:10.484000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.484085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.754 [2024-07-15 17:34:10.484109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.754 [2024-07-15 
17:34:10.484122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:59.754 [2024-07-15 17:34:10.484134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.754 [2024-07-15 17:34:10.485763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1235.169 ms, result 0 00:27:59.754 [2024-07-15 17:34:10.501043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.754 [2024-07-15 17:34:10.517068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.754 [2024-07-15 17:34:10.525174] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.321 Validate MD5 checksum, iteration 1 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:00.321 17:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.579 [2024-07-15 17:34:11.249133] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:00.579 [2024-07-15 17:34:11.249629] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98539 ] 00:28:00.579 [2024-07-15 17:34:11.404652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:00.579 [2024-07-15 17:34:11.426851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.837 [2024-07-15 17:34:11.535841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.325  Copying: 490/1024 [MB] (490 MBps) Copying: 941/1024 [MB] (451 MBps) Copying: 1024/1024 [MB] (average 466 MBps) 00:28:05.325 00:28:05.325 17:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:05.325 17:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:07.224 Validate MD5 checksum, iteration 2 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6e985afb9e9b63d47e45a0bfb130bae2 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6e985afb9e9b63d47e45a0bfb130bae2 != \6\e\9\8\5\a\f\b\9\e\9\b\6\3\d\4\7\e\4\5\a\0\b\f\b\1\3\0\b\a\e\2 ]] 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:07.224 17:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.482 [2024-07-15 17:34:18.103248] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:07.482 [2024-07-15 17:34:18.103539] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98612 ] 00:28:07.482 [2024-07-15 17:34:18.267250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:07.482 [2024-07-15 17:34:18.286675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.739 [2024-07-15 17:34:18.400669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.544  Copying: 409/1024 [MB] (409 MBps) Copying: 823/1024 [MB] (414 MBps) Copying: 1024/1024 [MB] (average 398 MBps) 00:28:11.544 00:28:11.544 17:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:11.544 17:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a417e35825c5c96f10cff438a4dad7e2 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a417e35825c5c96f10cff438a4dad7e2 != \a\4\1\7\e\3\5\8\2\5\c\5\c\9\6\f\1\0\c\f\f\4\3\8\a\4\d\a\d\7\e\2 ]] 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 98510 ]] 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 98510 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 98510 ']' 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 98510 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98510 00:28:14.073 killing process with pid 98510 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98510' 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 98510 00:28:14.073 17:34:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 98510 00:28:14.073 [2024-07-15 17:34:24.909327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:14.073 [2024-07-15 17:34:24.917957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.918021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:14.073 [2024-07-15 17:34:24.918044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:14.073 [2024-07-15 17:34:24.918058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.918096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:14.073 [2024-07-15 17:34:24.919442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.919474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:14.073 [2024-07-15 17:34:24.919492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.320 ms 00:28:14.073 [2024-07-15 17:34:24.919504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.919825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.919852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:14.073 [2024-07-15 17:34:24.919867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:28:14.073 [2024-07-15 17:34:24.919880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.921224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.921267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:14.073 [2024-07-15 17:34:24.921285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.319 ms 00:28:14.073 [2024-07-15 17:34:24.921312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.922636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.922671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:14.073 [2024-07-15 17:34:24.922693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.234 ms 00:28:14.073 [2024-07-15 17:34:24.922706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.924209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.924254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:14.073 [2024-07-15 17:34:24.924271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.429 ms 00:28:14.073 [2024-07-15 17:34:24.924285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.925816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.925996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:14.073 [2024-07-15 17:34:24.926024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:28:14.073 [2024-07-15 17:34:24.926040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.926144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.926165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:14.073 [2024-07-15 17:34:24.926180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:14.073 [2024-07-15 
17:34:24.926200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.927431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.927461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:14.073 [2024-07-15 17:34:24.927476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:28:14.073 [2024-07-15 17:34:24.927487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.073 [2024-07-15 17:34:24.928790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.073 [2024-07-15 17:34:24.928830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:14.073 [2024-07-15 17:34:24.928846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:28:14.073 [2024-07-15 17:34:24.928857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.330 [2024-07-15 17:34:24.930034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.330 [2024-07-15 17:34:24.930075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:14.330 [2024-07-15 17:34:24.930091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:28:14.330 [2024-07-15 17:34:24.930102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.330 [2024-07-15 17:34:24.931194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.330 [2024-07-15 17:34:24.931235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:14.330 [2024-07-15 17:34:24.931251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.013 ms 00:28:14.330 [2024-07-15 17:34:24.931262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.330 [2024-07-15 17:34:24.931303] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:14.330 [2024-07-15 17:34:24.931329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.330 [2024-07-15 17:34:24.931344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.330 [2024-07-15 17:34:24.931369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:14.330 [2024-07-15 17:34:24.931386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.330 [2024-07-15 17:34:24.931493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 
state: free 00:28:14.331 [2024-07-15 17:34:24.931505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.331 [2024-07-15 17:34:24.931606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:14.331 [2024-07-15 17:34:24.931623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 652d2bfd-334e-4b2e-972b-8a875aca5eec 00:28:14.331 [2024-07-15 17:34:24.931637] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:14.331 [2024-07-15 17:34:24.931649] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:14.331 [2024-07-15 17:34:24.931661] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:14.331 [2024-07-15 17:34:24.931674] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:14.331 [2024-07-15 17:34:24.931685] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:14.331 [2024-07-15 17:34:24.931697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:14.331 [2024-07-15 17:34:24.931709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:14.331 [2024-07-15 17:34:24.931720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:14.331 [2024-07-15 17:34:24.931730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:14.331 [2024-07-15 17:34:24.931743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.331 [2024-07-15 17:34:24.931755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:14.331 [2024-07-15 17:34:24.931768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.442 ms 00:28:14.331 [2024-07-15 17:34:24.931781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.935063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.331 [2024-07-15 17:34:24.935222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:14.331 [2024-07-15 17:34:24.935342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.251 ms 00:28:14.331 [2024-07-15 17:34:24.935475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.935721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.331 [2024-07-15 17:34:24.935836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:14.331 [2024-07-15 17:34:24.935989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.167 ms 00:28:14.331 [2024-07-15 17:34:24.936055] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.948165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.948441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:14.331 [2024-07-15 17:34:24.948565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.948617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.948717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.948833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:14.331 [2024-07-15 17:34:24.948886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.948924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.949127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.949154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:14.331 [2024-07-15 17:34:24.949169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.949181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.949212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.949228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:14.331 [2024-07-15 17:34:24.949240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.949252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.973832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.973933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:14.331 [2024-07-15 17:34:24.973956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.973969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.988953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:14.331 [2024-07-15 17:34:24.989079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.989107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.989264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:14.331 [2024-07-15 17:34:24.989299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.989312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.989464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:14.331 [2024-07-15 17:34:24.989501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:28:14.331 [2024-07-15 17:34:24.989513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.989636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:14.331 [2024-07-15 17:34:24.989692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.989704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.989767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:14.331 [2024-07-15 17:34:24.989800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.989812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.989880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.989904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:14.331 [2024-07-15 17:34:24.989918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.989930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.990005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.331 [2024-07-15 17:34:24.990025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:14.331 [2024-07-15 17:34:24.990038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.331 [2024-07-15 17:34:24.990050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.331 [2024-07-15 17:34:24.990280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 72.246 ms, result 0 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.589 Remove shared memory files 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid98290 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:14.589 00:28:14.589 real 1m23.909s 00:28:14.589 user 1m54.090s 00:28:14.589 sys 0m26.263s 00:28:14.589 
************************************ 00:28:14.589 END TEST ftl_upgrade_shutdown 00:28:14.589 ************************************ 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.589 17:34:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.589 17:34:25 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:14.589 17:34:25 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:14.589 17:34:25 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:14.589 17:34:25 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:28:14.589 17:34:25 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.589 17:34:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:14.589 ************************************ 00:28:14.589 START TEST ftl_restore_fast 00:28:14.589 ************************************ 00:28:14.589 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:14.847 * Looking for test storage... 00:28:14.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.kz8H4fhV1T 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:14.847 17:34:25 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=98761 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 98761 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 98761 ']' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.847 17:34:25 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:14.847 [2024-07-15 17:34:25.678151] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:14.847 [2024-07-15 17:34:25.678347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98761 ] 00:28:15.105 [2024-07-15 17:34:25.834178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
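At this point restore.sh has parsed its options (-f selects the fast-shutdown variant, -c 0000:00:10.0 picks the NV cache controller, and the remaining positional argument 0000:00:11.0 becomes the base device) and has launched spdk_tgt (pid 98761), waiting for it to listen on /var/tmp/spdk.sock. A minimal sketch of equivalent option handling, reconstructed from the getopts :u:c:f trace above (the -u case is not exercised in this run, so its meaning here is an assumption):

# Sketch of restore.sh-style argument parsing; not the script itself.
fast_shutdown=0
nv_cache=""
while getopts :u:c:f opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;    # PCIe address of the write-buffer cache controller
    f) fast_shutdown=1 ;;     # later adds --fast-shutdown to bdev_ftl_create
    u) uuid=$OPTARG ;;        # assumed: UUID of an existing FTL instance to restore
  esac
done
shift $((OPTIND - 1))         # expands to "shift 3" for "-f -c 0000:00:10.0", as traced above
device=$1                     # base bdev controller, e.g. 0000:00:11.0
timeout=240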
00:28:15.105 [2024-07-15 17:34:25.855430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.363 [2024-07-15 17:34:25.997102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:15.929 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:16.186 17:34:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:16.443 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:16.443 { 00:28:16.443 "name": "nvme0n1", 00:28:16.443 "aliases": [ 00:28:16.443 "fd6a3596-3095-4ae9-98e2-9f2c0981890e" 00:28:16.443 ], 00:28:16.443 "product_name": "NVMe disk", 00:28:16.443 "block_size": 4096, 00:28:16.444 "num_blocks": 1310720, 00:28:16.444 "uuid": "fd6a3596-3095-4ae9-98e2-9f2c0981890e", 00:28:16.444 "assigned_rate_limits": { 00:28:16.444 "rw_ios_per_sec": 0, 00:28:16.444 "rw_mbytes_per_sec": 0, 00:28:16.444 "r_mbytes_per_sec": 0, 00:28:16.444 "w_mbytes_per_sec": 0 00:28:16.444 }, 00:28:16.444 "claimed": true, 00:28:16.444 "claim_type": "read_many_write_one", 00:28:16.444 "zoned": false, 00:28:16.444 "supported_io_types": { 00:28:16.444 "read": true, 00:28:16.444 "write": true, 00:28:16.444 "unmap": true, 00:28:16.444 "flush": true, 00:28:16.444 "reset": true, 00:28:16.444 "nvme_admin": true, 00:28:16.444 "nvme_io": true, 00:28:16.444 "nvme_io_md": false, 00:28:16.444 "write_zeroes": true, 00:28:16.444 "zcopy": false, 00:28:16.444 "get_zone_info": false, 00:28:16.444 "zone_management": false, 00:28:16.444 "zone_append": false, 00:28:16.444 "compare": true, 00:28:16.444 "compare_and_write": false, 00:28:16.444 "abort": true, 00:28:16.444 "seek_hole": false, 00:28:16.444 "seek_data": false, 00:28:16.444 "copy": true, 00:28:16.444 "nvme_iov_md": false 00:28:16.444 }, 00:28:16.444 "driver_specific": { 00:28:16.444 "nvme": [ 00:28:16.444 { 00:28:16.444 "pci_address": "0000:00:11.0", 00:28:16.444 "trid": { 00:28:16.444 "trtype": "PCIe", 00:28:16.444 "traddr": "0000:00:11.0" 00:28:16.444 }, 00:28:16.444 "ctrlr_data": { 00:28:16.444 "cntlid": 0, 00:28:16.444 "vendor_id": "0x1b36", 
00:28:16.444 "model_number": "QEMU NVMe Ctrl", 00:28:16.444 "serial_number": "12341", 00:28:16.444 "firmware_revision": "8.0.0", 00:28:16.444 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:16.444 "oacs": { 00:28:16.444 "security": 0, 00:28:16.444 "format": 1, 00:28:16.444 "firmware": 0, 00:28:16.444 "ns_manage": 1 00:28:16.444 }, 00:28:16.444 "multi_ctrlr": false, 00:28:16.444 "ana_reporting": false 00:28:16.444 }, 00:28:16.444 "vs": { 00:28:16.444 "nvme_version": "1.4" 00:28:16.444 }, 00:28:16.444 "ns_data": { 00:28:16.444 "id": 1, 00:28:16.444 "can_share": false 00:28:16.444 } 00:28:16.444 } 00:28:16.444 ], 00:28:16.444 "mp_policy": "active_passive" 00:28:16.444 } 00:28:16.444 } 00:28:16.444 ]' 00:28:16.444 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:16.444 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:16.444 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.702 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:16.959 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=cb6e3d0a-2770-48b0-9a7c-2f300d00a904 00:28:16.959 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:16.959 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb6e3d0a-2770-48b0-9a7c-2f300d00a904 00:28:16.959 17:34:27 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:17.524 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=33d78f42-51ff-4500-b168-bc06c12da903 00:28:17.524 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33d78f42-51ff-4500-b168-bc06c12da903 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local 
bdev_name=74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:17.781 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:18.040 { 00:28:18.040 "name": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:18.040 "aliases": [ 00:28:18.040 "lvs/nvme0n1p0" 00:28:18.040 ], 00:28:18.040 "product_name": "Logical Volume", 00:28:18.040 "block_size": 4096, 00:28:18.040 "num_blocks": 26476544, 00:28:18.040 "uuid": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:18.040 "assigned_rate_limits": { 00:28:18.040 "rw_ios_per_sec": 0, 00:28:18.040 "rw_mbytes_per_sec": 0, 00:28:18.040 "r_mbytes_per_sec": 0, 00:28:18.040 "w_mbytes_per_sec": 0 00:28:18.040 }, 00:28:18.040 "claimed": false, 00:28:18.040 "zoned": false, 00:28:18.040 "supported_io_types": { 00:28:18.040 "read": true, 00:28:18.040 "write": true, 00:28:18.040 "unmap": true, 00:28:18.040 "flush": false, 00:28:18.040 "reset": true, 00:28:18.040 "nvme_admin": false, 00:28:18.040 "nvme_io": false, 00:28:18.040 "nvme_io_md": false, 00:28:18.040 "write_zeroes": true, 00:28:18.040 "zcopy": false, 00:28:18.040 "get_zone_info": false, 00:28:18.040 "zone_management": false, 00:28:18.040 "zone_append": false, 00:28:18.040 "compare": false, 00:28:18.040 "compare_and_write": false, 00:28:18.040 "abort": false, 00:28:18.040 "seek_hole": true, 00:28:18.040 "seek_data": true, 00:28:18.040 "copy": false, 00:28:18.040 "nvme_iov_md": false 00:28:18.040 }, 00:28:18.040 "driver_specific": { 00:28:18.040 "lvol": { 00:28:18.040 "lvol_store_uuid": "33d78f42-51ff-4500-b168-bc06c12da903", 00:28:18.040 "base_bdev": "nvme0n1", 00:28:18.040 "thin_provision": true, 00:28:18.040 "num_allocated_clusters": 0, 00:28:18.040 "snapshot": false, 00:28:18.040 "clone": false, 00:28:18.040 "esnap_clone": false 00:28:18.040 } 00:28:18.040 } 00:28:18.040 } 00:28:18.040 ]' 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:18.040 17:34:28 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1378 -- # local bdev_name=74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:18.297 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:18.861 { 00:28:18.861 "name": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:18.861 "aliases": [ 00:28:18.861 "lvs/nvme0n1p0" 00:28:18.861 ], 00:28:18.861 "product_name": "Logical Volume", 00:28:18.861 "block_size": 4096, 00:28:18.861 "num_blocks": 26476544, 00:28:18.861 "uuid": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:18.861 "assigned_rate_limits": { 00:28:18.861 "rw_ios_per_sec": 0, 00:28:18.861 "rw_mbytes_per_sec": 0, 00:28:18.861 "r_mbytes_per_sec": 0, 00:28:18.861 "w_mbytes_per_sec": 0 00:28:18.861 }, 00:28:18.861 "claimed": false, 00:28:18.861 "zoned": false, 00:28:18.861 "supported_io_types": { 00:28:18.861 "read": true, 00:28:18.861 "write": true, 00:28:18.861 "unmap": true, 00:28:18.861 "flush": false, 00:28:18.861 "reset": true, 00:28:18.861 "nvme_admin": false, 00:28:18.861 "nvme_io": false, 00:28:18.861 "nvme_io_md": false, 00:28:18.861 "write_zeroes": true, 00:28:18.861 "zcopy": false, 00:28:18.861 "get_zone_info": false, 00:28:18.861 "zone_management": false, 00:28:18.861 "zone_append": false, 00:28:18.861 "compare": false, 00:28:18.861 "compare_and_write": false, 00:28:18.861 "abort": false, 00:28:18.861 "seek_hole": true, 00:28:18.861 "seek_data": true, 00:28:18.861 "copy": false, 00:28:18.861 "nvme_iov_md": false 00:28:18.861 }, 00:28:18.861 "driver_specific": { 00:28:18.861 "lvol": { 00:28:18.861 "lvol_store_uuid": "33d78f42-51ff-4500-b168-bc06c12da903", 00:28:18.861 "base_bdev": "nvme0n1", 00:28:18.861 "thin_provision": true, 00:28:18.861 "num_allocated_clusters": 0, 00:28:18.861 "snapshot": false, 00:28:18.861 "clone": false, 00:28:18.861 "esnap_clone": false 00:28:18.861 } 00:28:18.861 } 00:28:18.861 } 00:28:18.861 ]' 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:18.861 17:34:29 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 
-- # local bdev_info 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:19.118 17:34:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:19.376 { 00:28:19.376 "name": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:19.376 "aliases": [ 00:28:19.376 "lvs/nvme0n1p0" 00:28:19.376 ], 00:28:19.376 "product_name": "Logical Volume", 00:28:19.376 "block_size": 4096, 00:28:19.376 "num_blocks": 26476544, 00:28:19.376 "uuid": "74ac3d10-c3b0-4d30-97ab-d898f4dd5f52", 00:28:19.376 "assigned_rate_limits": { 00:28:19.376 "rw_ios_per_sec": 0, 00:28:19.376 "rw_mbytes_per_sec": 0, 00:28:19.376 "r_mbytes_per_sec": 0, 00:28:19.376 "w_mbytes_per_sec": 0 00:28:19.376 }, 00:28:19.376 "claimed": false, 00:28:19.376 "zoned": false, 00:28:19.376 "supported_io_types": { 00:28:19.376 "read": true, 00:28:19.376 "write": true, 00:28:19.376 "unmap": true, 00:28:19.376 "flush": false, 00:28:19.376 "reset": true, 00:28:19.376 "nvme_admin": false, 00:28:19.376 "nvme_io": false, 00:28:19.376 "nvme_io_md": false, 00:28:19.376 "write_zeroes": true, 00:28:19.376 "zcopy": false, 00:28:19.376 "get_zone_info": false, 00:28:19.376 "zone_management": false, 00:28:19.376 "zone_append": false, 00:28:19.376 "compare": false, 00:28:19.376 "compare_and_write": false, 00:28:19.376 "abort": false, 00:28:19.376 "seek_hole": true, 00:28:19.376 "seek_data": true, 00:28:19.376 "copy": false, 00:28:19.376 "nvme_iov_md": false 00:28:19.376 }, 00:28:19.376 "driver_specific": { 00:28:19.376 "lvol": { 00:28:19.376 "lvol_store_uuid": "33d78f42-51ff-4500-b168-bc06c12da903", 00:28:19.376 "base_bdev": "nvme0n1", 00:28:19.376 "thin_provision": true, 00:28:19.376 "num_allocated_clusters": 0, 00:28:19.376 "snapshot": false, 00:28:19.376 "clone": false, 00:28:19.376 "esnap_clone": false 00:28:19.376 } 00:28:19.376 } 00:28:19.376 } 00:28:19.376 ]' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 --l2p_dram_limit 10' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:19.376 17:34:30 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74ac3d10-c3b0-4d30-97ab-d898f4dd5f52 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:19.635 [2024-07-15 17:34:30.456498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.456595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.635 [2024-07-15 17:34:30.456634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:19.635 [2024-07-15 17:34:30.456654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.456779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.456814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.635 [2024-07-15 17:34:30.456832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:19.635 [2024-07-15 17:34:30.456854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.456907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.635 [2024-07-15 17:34:30.457434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.635 [2024-07-15 17:34:30.457469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.457489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.635 [2024-07-15 17:34:30.457516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:28:19.635 [2024-07-15 17:34:30.457535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.457707] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4a1ed47d-2b99-4656-ab93-ac889817c969 00:28:19.635 [2024-07-15 17:34:30.460397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.460444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:19.635 [2024-07-15 17:34:30.460468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:19.635 [2024-07-15 17:34:30.460494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.475599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.475701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.635 [2024-07-15 17:34:30.475731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.976 ms 00:28:19.635 [2024-07-15 17:34:30.475748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.475985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.476016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.635 [2024-07-15 17:34:30.476039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:28:19.635 [2024-07-15 17:34:30.476055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.635 [2024-07-15 17:34:30.476208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.635 [2024-07-15 17:34:30.476230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register 
IO device 00:28:19.635 [2024-07-15 17:34:30.476251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:19.636 [2024-07-15 17:34:30.476278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.636 [2024-07-15 17:34:30.476338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.636 [2024-07-15 17:34:30.479769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.636 [2024-07-15 17:34:30.479825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.636 [2024-07-15 17:34:30.479845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.457 ms 00:28:19.636 [2024-07-15 17:34:30.479864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.636 [2024-07-15 17:34:30.479927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.636 [2024-07-15 17:34:30.479961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.636 [2024-07-15 17:34:30.479978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:19.636 [2024-07-15 17:34:30.480000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.636 [2024-07-15 17:34:30.480064] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:19.636 [2024-07-15 17:34:30.480319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.636 [2024-07-15 17:34:30.480346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.636 [2024-07-15 17:34:30.480409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:19.636 [2024-07-15 17:34:30.480432] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.636 [2024-07-15 17:34:30.480454] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.636 [2024-07-15 17:34:30.480475] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.636 [2024-07-15 17:34:30.480494] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.636 [2024-07-15 17:34:30.480510] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.636 [2024-07-15 17:34:30.480528] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.636 [2024-07-15 17:34:30.480545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.636 [2024-07-15 17:34:30.480563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.636 [2024-07-15 17:34:30.480591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:28:19.636 [2024-07-15 17:34:30.480610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.636 [2024-07-15 17:34:30.480732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.636 [2024-07-15 17:34:30.480773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.636 [2024-07-15 17:34:30.480798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:19.636 [2024-07-15 17:34:30.480833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.636 
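Condensed from the rpc.py calls traced above, the bdev stack underneath ftl0 is a 103424 MiB thin-provisioned lvol on the base NVMe (0000:00:11.0) for data plus a 5171 MiB split of the second NVMe (0000:00:10.0) as the non-volatile write-buffer cache. A sketch of the same sequence, capturing the generated names instead of hard-coding the UUIDs seen in this run (33d78f42-... / 74ac3d10-...):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base device -> nvme0n1
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                      # prints the new lvstore UUID
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")           # thin-provisioned; prints the lvol bdev name
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # cache device -> nvc0n1
$rpc bdev_split_create nvc0n1 -s 5171 1                               # yields nvc0n1p0 (5171 MiB)
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown

The --l2p_dram_limit 10 argument explains the sizing reported around it: 20971520 L2P entries at 4 bytes each is an 80 MiB table (the 80.00 MiB l2p region in the layout dump that follows), of which only about 10 MiB may stay resident in DRAM (the l2p cache later reports a 9-of-10 MiB maximum).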
[2024-07-15 17:34:30.480979] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.636 [2024-07-15 17:34:30.481021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.636 [2024-07-15 17:34:30.481039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.636 [2024-07-15 17:34:30.481104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.636 [2024-07-15 17:34:30.481150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.636 [2024-07-15 17:34:30.481181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.636 [2024-07-15 17:34:30.481198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.636 [2024-07-15 17:34:30.481211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.636 [2024-07-15 17:34:30.481232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.636 [2024-07-15 17:34:30.481246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.636 [2024-07-15 17:34:30.481263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.636 [2024-07-15 17:34:30.481294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.636 [2024-07-15 17:34:30.481342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.636 [2024-07-15 17:34:30.481424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.636 [2024-07-15 17:34:30.481469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.636 [2024-07-15 17:34:30.481527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:28:19.636 [2024-07-15 17:34:30.481571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.636 [2024-07-15 17:34:30.481603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.636 [2024-07-15 17:34:30.481621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.636 [2024-07-15 17:34:30.481634] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.636 [2024-07-15 17:34:30.481662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.636 [2024-07-15 17:34:30.481677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.636 [2024-07-15 17:34:30.481694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.636 [2024-07-15 17:34:30.481725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.636 [2024-07-15 17:34:30.481739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481755] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.636 [2024-07-15 17:34:30.481771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.636 [2024-07-15 17:34:30.481794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.636 [2024-07-15 17:34:30.481832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.636 [2024-07-15 17:34:30.481846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.636 [2024-07-15 17:34:30.481863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.636 [2024-07-15 17:34:30.481877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.636 [2024-07-15 17:34:30.481894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.636 [2024-07-15 17:34:30.481909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.636 [2024-07-15 17:34:30.481933] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.636 [2024-07-15 17:34:30.481953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.636 [2024-07-15 17:34:30.481973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.636 [2024-07-15 17:34:30.481989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.636 [2024-07-15 17:34:30.482008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:19.636 [2024-07-15 17:34:30.482024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.636 [2024-07-15 17:34:30.482042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:28:19.636 [2024-07-15 17:34:30.482057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.636 [2024-07-15 17:34:30.482078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.636 [2024-07-15 17:34:30.482093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.636 [2024-07-15 17:34:30.482112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.636 [2024-07-15 17:34:30.482126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.636 [2024-07-15 17:34:30.482145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.636 [2024-07-15 17:34:30.482160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.636 [2024-07-15 17:34:30.482179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.636 [2024-07-15 17:34:30.482194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.636 [2024-07-15 17:34:30.482215] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.637 [2024-07-15 17:34:30.482231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.637 [2024-07-15 17:34:30.482263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.637 [2024-07-15 17:34:30.482280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.637 [2024-07-15 17:34:30.482299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.637 [2024-07-15 17:34:30.482314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.637 [2024-07-15 17:34:30.482333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.637 [2024-07-15 17:34:30.482348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.637 [2024-07-15 17:34:30.482386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:28:19.637 [2024-07-15 17:34:30.482403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.637 [2024-07-15 17:34:30.482483] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:28:19.637 [2024-07-15 17:34:30.482503] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:22.919 [2024-07-15 17:34:33.318868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.319196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:22.919 [2024-07-15 17:34:33.319343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2836.380 ms 00:28:22.919 [2024-07-15 17:34:33.319416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.340609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.340949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.919 [2024-07-15 17:34:33.341132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.879 ms 00:28:22.919 [2024-07-15 17:34:33.341191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.341514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.341651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:22.919 [2024-07-15 17:34:33.341773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:22.919 [2024-07-15 17:34:33.341899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.359829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.360165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.919 [2024-07-15 17:34:33.360306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.777 ms 00:28:22.919 [2024-07-15 17:34:33.360374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.360496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.360611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.919 [2024-07-15 17:34:33.360674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:22.919 [2024-07-15 17:34:33.360724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.361662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.361798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.919 [2024-07-15 17:34:33.361909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:28:22.919 [2024-07-15 17:34:33.361957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.362186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.362234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.919 [2024-07-15 17:34:33.362256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:28:22.919 [2024-07-15 17:34:33.362268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.375296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.375353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.919 [2024-07-15 
17:34:33.375391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.992 ms 00:28:22.919 [2024-07-15 17:34:33.375405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.386939] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:22.919 [2024-07-15 17:34:33.392591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.392629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:22.919 [2024-07-15 17:34:33.392649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.044 ms 00:28:22.919 [2024-07-15 17:34:33.392665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.474030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.474142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:22.919 [2024-07-15 17:34:33.474166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.305 ms 00:28:22.919 [2024-07-15 17:34:33.474186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.474469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.474496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:22.919 [2024-07-15 17:34:33.474511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:28:22.919 [2024-07-15 17:34:33.474526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.478455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.478502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:22.919 [2024-07-15 17:34:33.478523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:28:22.919 [2024-07-15 17:34:33.478540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.481822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.481872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:22.919 [2024-07-15 17:34:33.481890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:28:22.919 [2024-07-15 17:34:33.481905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.482430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.482461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:22.919 [2024-07-15 17:34:33.482477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:28:22.919 [2024-07-15 17:34:33.482495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.526204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.526300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:22.919 [2024-07-15 17:34:33.526327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.673 ms 00:28:22.919 [2024-07-15 17:34:33.526343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.919 [2024-07-15 17:34:33.532279] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.919 [2024-07-15 17:34:33.532325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:22.920 [2024-07-15 17:34:33.532344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.861 ms 00:28:22.920 [2024-07-15 17:34:33.532373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.920 [2024-07-15 17:34:33.536275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.920 [2024-07-15 17:34:33.536327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:22.920 [2024-07-15 17:34:33.536345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms 00:28:22.920 [2024-07-15 17:34:33.536373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.920 [2024-07-15 17:34:33.540331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.920 [2024-07-15 17:34:33.540393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:22.920 [2024-07-15 17:34:33.540413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.907 ms 00:28:22.920 [2024-07-15 17:34:33.540431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.920 [2024-07-15 17:34:33.540497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.920 [2024-07-15 17:34:33.540523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:22.920 [2024-07-15 17:34:33.540537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:22.920 [2024-07-15 17:34:33.540553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.920 [2024-07-15 17:34:33.540647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.920 [2024-07-15 17:34:33.540667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:22.920 [2024-07-15 17:34:33.540681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:22.920 [2024-07-15 17:34:33.540700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.920 [2024-07-15 17:34:33.542486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3085.412 ms, result 0 00:28:22.920 { 00:28:22.920 "name": "ftl0", 00:28:22.920 "uuid": "4a1ed47d-2b99-4656-ab93-ac889817c969" 00:28:22.920 } 00:28:22.920 17:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:22.920 17:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:23.178 17:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:23.178 17:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:23.439 [2024-07-15 17:34:34.160491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.160587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:23.439 [2024-07-15 17:34:34.160615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:23.439 [2024-07-15 17:34:34.160628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.160670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:28:23.439 [2024-07-15 17:34:34.161954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.161990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.439 [2024-07-15 17:34:34.162005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:28:23.439 [2024-07-15 17:34:34.162020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.162341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.162376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.439 [2024-07-15 17:34:34.162395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:28:23.439 [2024-07-15 17:34:34.162410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.165735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.165766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.439 [2024-07-15 17:34:34.165781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:28:23.439 [2024-07-15 17:34:34.165796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.172421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.172471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.439 [2024-07-15 17:34:34.172485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.599 ms 00:28:23.439 [2024-07-15 17:34:34.172503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.173988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.174035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.439 [2024-07-15 17:34:34.174050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:28:23.439 [2024-07-15 17:34:34.174064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.179004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.179065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.439 [2024-07-15 17:34:34.179084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:28:23.439 [2024-07-15 17:34:34.179102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.179266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.179293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.439 [2024-07-15 17:34:34.179307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:28:23.439 [2024-07-15 17:34:34.179411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.181497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.181538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:23.439 [2024-07-15 17:34:34.181555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:28:23.439 [2024-07-15 17:34:34.181570] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.183191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.183235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:23.439 [2024-07-15 17:34:34.183249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:28:23.439 [2024-07-15 17:34:34.183263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.184520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.184557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.439 [2024-07-15 17:34:34.184573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:28:23.439 [2024-07-15 17:34:34.184586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.185998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.439 [2024-07-15 17:34:34.186038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.439 [2024-07-15 17:34:34.186054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:28:23.439 [2024-07-15 17:34:34.186068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.439 [2024-07-15 17:34:34.186109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.439 [2024-07-15 17:34:34.186141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 
17:34:34.186347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:28:23.439 [2024-07-15 17:34:34.186735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.439 [2024-07-15 17:34:34.186900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.186991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.440 [2024-07-15 17:34:34.187680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.440 [2024-07-15 17:34:34.187693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1ed47d-2b99-4656-ab93-ac889817c969 00:28:23.440 [2024-07-15 17:34:34.187710] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:23.440 [2024-07-15 17:34:34.187722] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:23.440 [2024-07-15 17:34:34.187738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:23.440 [2024-07-15 17:34:34.187752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:23.440 [2024-07-15 17:34:34.187769] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.440 [2024-07-15 17:34:34.187792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.440 [2024-07-15 17:34:34.187808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.440 [2024-07-15 17:34:34.187818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.440 [2024-07-15 17:34:34.187831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.440 [2024-07-15 17:34:34.187843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.440 [2024-07-15 17:34:34.187867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.440 [2024-07-15 17:34:34.187880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:28:23.440 [2024-07-15 17:34:34.187894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.190856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.440 [2024-07-15 17:34:34.190901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:28:23.440 [2024-07-15 17:34:34.190919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:28:23.440 [2024-07-15 17:34:34.190935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.191120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.440 [2024-07-15 17:34:34.191139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.440 [2024-07-15 17:34:34.191153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:28:23.440 [2024-07-15 17:34:34.191167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.202706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.202778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.440 [2024-07-15 17:34:34.202800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.202816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.202918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.202937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.440 [2024-07-15 17:34:34.202950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.202965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.203094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.203122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.440 [2024-07-15 17:34:34.203136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.203154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.203183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.203200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.440 [2024-07-15 17:34:34.203212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.203227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.229746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.229847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.440 [2024-07-15 17:34:34.229873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.229888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.244164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.244259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.440 [2024-07-15 17:34:34.244280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.244296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.244455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.244484] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.440 [2024-07-15 17:34:34.244500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.244515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.244590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.244612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.440 [2024-07-15 17:34:34.244626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.244641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.440 [2024-07-15 17:34:34.244766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.440 [2024-07-15 17:34:34.244791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.440 [2024-07-15 17:34:34.244805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.440 [2024-07-15 17:34:34.244820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.441 [2024-07-15 17:34:34.244878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.441 [2024-07-15 17:34:34.244903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.441 [2024-07-15 17:34:34.244916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.441 [2024-07-15 17:34:34.244930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.441 [2024-07-15 17:34:34.244995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.441 [2024-07-15 17:34:34.245020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.441 [2024-07-15 17:34:34.245033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.441 [2024-07-15 17:34:34.245048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.441 [2024-07-15 17:34:34.245121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.441 [2024-07-15 17:34:34.245141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.441 [2024-07-15 17:34:34.245155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.441 [2024-07-15 17:34:34.245169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.441 [2024-07-15 17:34:34.245372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.830 ms, result 0 00:28:23.441 true 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 98761 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 98761 ']' 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 98761 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.441 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98761 00:28:23.699 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.699 killing process with pid 98761 00:28:23.699 17:34:34 ftl.ftl_restore_fast -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.699 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98761' 00:28:23.699 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 98761 00:28:23.699 17:34:34 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 98761 00:28:26.995 17:34:37 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:32.255 262144+0 records in 00:28:32.255 262144+0 records out 00:28:32.255 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.82579 s, 223 MB/s 00:28:32.255 17:34:42 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:34.155 17:34:44 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.155 [2024-07-15 17:34:44.663339] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:34.155 [2024-07-15 17:34:44.663566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98978 ] 00:28:34.155 [2024-07-15 17:34:44.816093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:34.155 [2024-07-15 17:34:44.838717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.155 [2024-07-15 17:34:44.945337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.414 [2024-07-15 17:34:45.079873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:34.414 [2024-07-15 17:34:45.079984] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:34.414 [2024-07-15 17:34:45.242912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.242992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:34.414 [2024-07-15 17:34:45.243015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:34.414 [2024-07-15 17:34:45.243027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.243129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.243151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:34.414 [2024-07-15 17:34:45.243169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:34.414 [2024-07-15 17:34:45.243180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.243213] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:34.414 [2024-07-15 17:34:45.243694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:34.414 [2024-07-15 17:34:45.243732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.243746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:34.414 [2024-07-15 
17:34:45.243773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:28:34.414 [2024-07-15 17:34:45.243785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.245907] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:34.414 [2024-07-15 17:34:45.248927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.248977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:34.414 [2024-07-15 17:34:45.249001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:28:34.414 [2024-07-15 17:34:45.249022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.249114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.249144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:34.414 [2024-07-15 17:34:45.249158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:34.414 [2024-07-15 17:34:45.249182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.258445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.258525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:34.414 [2024-07-15 17:34:45.258544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.174 ms 00:28:34.414 [2024-07-15 17:34:45.258570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.258690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.258710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:34.414 [2024-07-15 17:34:45.258728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:34.414 [2024-07-15 17:34:45.258740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.258846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.258873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:34.414 [2024-07-15 17:34:45.258887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:34.414 [2024-07-15 17:34:45.258898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.258936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:34.414 [2024-07-15 17:34:45.261109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.261145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:34.414 [2024-07-15 17:34:45.261160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:28:34.414 [2024-07-15 17:34:45.261172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.261223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.261239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:34.414 [2024-07-15 17:34:45.261252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:34.414 [2024-07-15 
17:34:45.261275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.261320] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:34.414 [2024-07-15 17:34:45.261355] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:34.414 [2024-07-15 17:34:45.261451] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:34.414 [2024-07-15 17:34:45.261486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:34.414 [2024-07-15 17:34:45.261594] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:34.414 [2024-07-15 17:34:45.261609] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:34.414 [2024-07-15 17:34:45.261624] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:34.414 [2024-07-15 17:34:45.261639] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:34.414 [2024-07-15 17:34:45.261653] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:34.414 [2024-07-15 17:34:45.261666] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:34.414 [2024-07-15 17:34:45.261677] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:34.414 [2024-07-15 17:34:45.261700] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:34.414 [2024-07-15 17:34:45.261720] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:34.414 [2024-07-15 17:34:45.261732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.261758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:34.414 [2024-07-15 17:34:45.261770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:28:34.414 [2024-07-15 17:34:45.261780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.261889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.414 [2024-07-15 17:34:45.261908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:34.414 [2024-07-15 17:34:45.261921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:34.414 [2024-07-15 17:34:45.261932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.414 [2024-07-15 17:34:45.262048] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:34.414 [2024-07-15 17:34:45.262075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:34.414 [2024-07-15 17:34:45.262095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:34.414 [2024-07-15 17:34:45.262107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.414 [2024-07-15 17:34:45.262127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:34.414 [2024-07-15 17:34:45.262138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:34.414 [2024-07-15 17:34:45.262150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:28:34.414 [2024-07-15 17:34:45.262160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:34.414 [2024-07-15 17:34:45.262172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:34.414 [2024-07-15 17:34:45.262182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:34.414 [2024-07-15 17:34:45.262193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:34.414 [2024-07-15 17:34:45.262203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:34.414 [2024-07-15 17:34:45.262218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:34.414 [2024-07-15 17:34:45.262229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:34.414 [2024-07-15 17:34:45.262240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:34.414 [2024-07-15 17:34:45.262262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:34.415 [2024-07-15 17:34:45.262284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:34.415 [2024-07-15 17:34:45.262316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:34.415 [2024-07-15 17:34:45.262347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:34.415 [2024-07-15 17:34:45.262404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:34.415 [2024-07-15 17:34:45.262445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:34.415 [2024-07-15 17:34:45.262477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:34.415 [2024-07-15 17:34:45.262498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:34.415 [2024-07-15 17:34:45.262515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:34.415 [2024-07-15 17:34:45.262529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:34.415 [2024-07-15 17:34:45.262546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:34.415 [2024-07-15 17:34:45.262557] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:34.415 [2024-07-15 17:34:45.262568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:34.415 [2024-07-15 17:34:45.262589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:34.415 [2024-07-15 17:34:45.262599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262609] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:34.415 [2024-07-15 17:34:45.262624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:34.415 [2024-07-15 17:34:45.262657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.415 [2024-07-15 17:34:45.262692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:34.415 [2024-07-15 17:34:45.262709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:34.415 [2024-07-15 17:34:45.262724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:34.415 [2024-07-15 17:34:45.262736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:34.415 [2024-07-15 17:34:45.262746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:34.415 [2024-07-15 17:34:45.262757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:34.415 [2024-07-15 17:34:45.262769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:34.415 [2024-07-15 17:34:45.262783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:34.415 [2024-07-15 17:34:45.262807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:34.415 [2024-07-15 17:34:45.262818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:34.415 [2024-07-15 17:34:45.262830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:34.415 [2024-07-15 17:34:45.262841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:34.415 [2024-07-15 17:34:45.262857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:34.415 [2024-07-15 17:34:45.262869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:34.415 [2024-07-15 17:34:45.262881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:34.415 [2024-07-15 17:34:45.262892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:34.415 [2024-07-15 
17:34:45.262903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:34.415 [2024-07-15 17:34:45.262966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:34.415 [2024-07-15 17:34:45.262980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.262997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:34.415 [2024-07-15 17:34:45.263009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:34.415 [2024-07-15 17:34:45.263021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:34.415 [2024-07-15 17:34:45.263032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:34.415 [2024-07-15 17:34:45.263045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.415 [2024-07-15 17:34:45.263064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:34.415 [2024-07-15 17:34:45.263083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:28:34.415 [2024-07-15 17:34:45.263098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.288780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.288851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.674 [2024-07-15 17:34:45.288890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.586 ms 00:28:34.674 [2024-07-15 17:34:45.288908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.289051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.289081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:34.674 [2024-07-15 17:34:45.289102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:34.674 [2024-07-15 17:34:45.289114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.302340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.302421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.674 [2024-07-15 17:34:45.302442] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 13.120 ms 00:28:34.674 [2024-07-15 17:34:45.302456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.302538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.302564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.674 [2024-07-15 17:34:45.302578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:34.674 [2024-07-15 17:34:45.302589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.303240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.303281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.674 [2024-07-15 17:34:45.303299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:28:34.674 [2024-07-15 17:34:45.303310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.303537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.303563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.674 [2024-07-15 17:34:45.303583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:28:34.674 [2024-07-15 17:34:45.303594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.311603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.311661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.674 [2024-07-15 17:34:45.311687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.975 ms 00:28:34.674 [2024-07-15 17:34:45.311699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.314842] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:34.674 [2024-07-15 17:34:45.314894] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:34.674 [2024-07-15 17:34:45.314930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.314943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:34.674 [2024-07-15 17:34:45.314956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:28:34.674 [2024-07-15 17:34:45.314967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.330904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.330960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:34.674 [2024-07-15 17:34:45.330979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.884 ms 00:28:34.674 [2024-07-15 17:34:45.331015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.333319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.333373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:34.674 [2024-07-15 17:34:45.333403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:28:34.674 [2024-07-15 
17:34:45.333419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.334982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.335022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:34.674 [2024-07-15 17:34:45.335038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:28:34.674 [2024-07-15 17:34:45.335048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.335567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.335606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:34.674 [2024-07-15 17:34:45.335621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:28:34.674 [2024-07-15 17:34:45.335638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.359767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.359855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:34.674 [2024-07-15 17:34:45.359876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.102 ms 00:28:34.674 [2024-07-15 17:34:45.359890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.368563] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:34.674 [2024-07-15 17:34:45.373652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.373712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:34.674 [2024-07-15 17:34:45.373742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.664 ms 00:28:34.674 [2024-07-15 17:34:45.373776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.373938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.373964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:34.674 [2024-07-15 17:34:45.373979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:34.674 [2024-07-15 17:34:45.373995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.374107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.374141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:34.674 [2024-07-15 17:34:45.374156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:34.674 [2024-07-15 17:34:45.374168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.374205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.374234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:34.674 [2024-07-15 17:34:45.374247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:34.674 [2024-07-15 17:34:45.374258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.374303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:34.674 [2024-07-15 17:34:45.374321] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.374333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:34.674 [2024-07-15 17:34:45.374353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:34.674 [2024-07-15 17:34:45.374394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.378774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.378824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:34.674 [2024-07-15 17:34:45.378852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:28:34.674 [2024-07-15 17:34:45.378866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.378954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.674 [2024-07-15 17:34:45.378986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:34.674 [2024-07-15 17:34:45.379008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:34.674 [2024-07-15 17:34:45.379021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.674 [2024-07-15 17:34:45.380422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.958 ms, result 0 00:29:13.143  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (26 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 162/1024 [MB] (26 MBps) Copying: 190/1024 [MB] (27 MBps) Copying: 216/1024 [MB] (26 MBps) Copying: 244/1024 [MB] (27 MBps) Copying: 271/1024 [MB] (27 MBps) Copying: 299/1024 [MB] (27 MBps) Copying: 326/1024 [MB] (27 MBps) Copying: 353/1024 [MB] (26 MBps) Copying: 380/1024 [MB] (26 MBps) Copying: 405/1024 [MB] (25 MBps) Copying: 432/1024 [MB] (26 MBps) Copying: 459/1024 [MB] (27 MBps) Copying: 485/1024 [MB] (26 MBps) Copying: 508/1024 [MB] (23 MBps) Copying: 532/1024 [MB] (23 MBps) Copying: 557/1024 [MB] (24 MBps) Copying: 580/1024 [MB] (23 MBps) Copying: 608/1024 [MB] (27 MBps) Copying: 635/1024 [MB] (27 MBps) Copying: 662/1024 [MB] (26 MBps) Copying: 686/1024 [MB] (24 MBps) Copying: 715/1024 [MB] (28 MBps) Copying: 742/1024 [MB] (26 MBps) Copying: 770/1024 [MB] (27 MBps) Copying: 798/1024 [MB] (27 MBps) Copying: 826/1024 [MB] (27 MBps) Copying: 854/1024 [MB] (28 MBps) Copying: 883/1024 [MB] (28 MBps) Copying: 910/1024 [MB] (26 MBps) Copying: 936/1024 [MB] (26 MBps) Copying: 962/1024 [MB] (26 MBps) Copying: 988/1024 [MB] (25 MBps) Copying: 1013/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 17:35:23.816453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.143 [2024-07-15 17:35:23.816550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:13.143 [2024-07-15 17:35:23.816574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:13.143 [2024-07-15 17:35:23.816599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.143 [2024-07-15 17:35:23.816633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:13.143 [2024-07-15 17:35:23.817964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.143 [2024-07-15 17:35:23.817995] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:13.143 [2024-07-15 17:35:23.818010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:29:13.143 [2024-07-15 17:35:23.818022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.143 [2024-07-15 17:35:23.819978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.143 [2024-07-15 17:35:23.820021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:13.143 [2024-07-15 17:35:23.820047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:29:13.143 [2024-07-15 17:35:23.820060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.143 [2024-07-15 17:35:23.820096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.143 [2024-07-15 17:35:23.820112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:13.143 [2024-07-15 17:35:23.820126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:13.143 [2024-07-15 17:35:23.820138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.143 [2024-07-15 17:35:23.820203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.143 [2024-07-15 17:35:23.820218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:13.143 [2024-07-15 17:35:23.820231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:13.143 [2024-07-15 17:35:23.820248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.143 [2024-07-15 17:35:23.820268] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:13.143 [2024-07-15 17:35:23.820298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:13.143 [2024-07-15 17:35:23.820313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:13.143 [2024-07-15 17:35:23.820326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:13.143 [2024-07-15 17:35:23.820339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:13.143 [2024-07-15 17:35:23.820366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820513] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820915] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.820998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 
17:35:23.821221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:29:13.144 [2024-07-15 17:35:23.821553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:13.144 [2024-07-15 17:35:23.821743] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:13.144 [2024-07-15 17:35:23.821755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1ed47d-2b99-4656-ab93-ac889817c969 00:29:13.144 [2024-07-15 17:35:23.821784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:13.144 [2024-07-15 17:35:23.821796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:13.144 [2024-07-15 17:35:23.821807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:13.144 [2024-07-15 17:35:23.821819] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:13.144 [2024-07-15 17:35:23.821830] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:13.144 [2024-07-15 17:35:23.821842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:13.144 [2024-07-15 17:35:23.821858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:13.144 [2024-07-15 17:35:23.821868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:13.144 [2024-07-15 17:35:23.821878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:13.144 [2024-07-15 17:35:23.821905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.144 [2024-07-15 17:35:23.821917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:13.144 [2024-07-15 17:35:23.821929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:29:13.144 [2024-07-15 17:35:23.821941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.825084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.144 [2024-07-15 17:35:23.825116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:13.144 [2024-07-15 17:35:23.825159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:29:13.144 [2024-07-15 17:35:23.825172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.825391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.144 [2024-07-15 17:35:23.825443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:13.144 [2024-07-15 17:35:23.825459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:29:13.144 [2024-07-15 17:35:23.825472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.835470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.835528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.144 [2024-07-15 17:35:23.835545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 [2024-07-15 17:35:23.835557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.835659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.835676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.144 [2024-07-15 17:35:23.835701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 [2024-07-15 17:35:23.835713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.835864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.835886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.144 [2024-07-15 17:35:23.835900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 [2024-07-15 17:35:23.835934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.835965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.835981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.144 [2024-07-15 17:35:23.836003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 [2024-07-15 17:35:23.836016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.857302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.857413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.144 [2024-07-15 17:35:23.857435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 [2024-07-15 17:35:23.857449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.872000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.872106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.144 [2024-07-15 17:35:23.872127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.144 
[2024-07-15 17:35:23.872151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.144 [2024-07-15 17:35:23.872260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.144 [2024-07-15 17:35:23.872280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.145 [2024-07-15 17:35:23.872294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.872377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.145 [2024-07-15 17:35:23.872404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.145 [2024-07-15 17:35:23.872429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.872539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.145 [2024-07-15 17:35:23.872567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.145 [2024-07-15 17:35:23.872582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.872641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.145 [2024-07-15 17:35:23.872665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:13.145 [2024-07-15 17:35:23.872684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.872751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.145 [2024-07-15 17:35:23.872767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.145 [2024-07-15 17:35:23.872781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.872854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.145 [2024-07-15 17:35:23.872878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.145 [2024-07-15 17:35:23.872891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.145 [2024-07-15 17:35:23.872903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.145 [2024-07-15 17:35:23.873077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.573 ms, result 0 00:29:13.710 00:29:13.710 00:29:13.710 17:35:24 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:13.710 [2024-07-15 17:35:24.495958] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:29:13.710 [2024-07-15 17:35:24.496179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99360 ] 00:29:13.969 [2024-07-15 17:35:24.656388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:13.969 [2024-07-15 17:35:24.680768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.969 [2024-07-15 17:35:24.811397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.226 [2024-07-15 17:35:24.987585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.226 [2024-07-15 17:35:24.987681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:14.485 [2024-07-15 17:35:25.156578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.485 [2024-07-15 17:35:25.156696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:14.485 [2024-07-15 17:35:25.156719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:14.485 [2024-07-15 17:35:25.156745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.485 [2024-07-15 17:35:25.156857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.485 [2024-07-15 17:35:25.156917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:14.485 [2024-07-15 17:35:25.156936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:14.486 [2024-07-15 17:35:25.156947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.156995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:14.486 [2024-07-15 17:35:25.157475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:14.486 [2024-07-15 17:35:25.157514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.157528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:14.486 [2024-07-15 17:35:25.157547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:29:14.486 [2024-07-15 17:35:25.157560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.158131] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:14.486 [2024-07-15 17:35:25.158179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.158200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:14.486 [2024-07-15 17:35:25.158213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:14.486 [2024-07-15 17:35:25.158225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.158298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.158361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:14.486 [2024-07-15 17:35:25.158386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:14.486 [2024-07-15 
17:35:25.158421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.158868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.158909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:14.486 [2024-07-15 17:35:25.158925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:29:14.486 [2024-07-15 17:35:25.158937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.159150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.159178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:14.486 [2024-07-15 17:35:25.159193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:29:14.486 [2024-07-15 17:35:25.159203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.159269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.159305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:14.486 [2024-07-15 17:35:25.159318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:14.486 [2024-07-15 17:35:25.159335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.159383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:14.486 [2024-07-15 17:35:25.162708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.162747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:14.486 [2024-07-15 17:35:25.162770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:29:14.486 [2024-07-15 17:35:25.162799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.162864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.162883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:14.486 [2024-07-15 17:35:25.162902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:14.486 [2024-07-15 17:35:25.162927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.163024] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:14.486 [2024-07-15 17:35:25.163061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:14.486 [2024-07-15 17:35:25.163113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:14.486 [2024-07-15 17:35:25.163138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:14.486 [2024-07-15 17:35:25.163254] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:14.486 [2024-07-15 17:35:25.163284] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:14.486 [2024-07-15 17:35:25.163333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:14.486 
[2024-07-15 17:35:25.163365] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:14.486 [2024-07-15 17:35:25.163394] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:14.486 [2024-07-15 17:35:25.163441] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:14.486 [2024-07-15 17:35:25.163466] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:14.486 [2024-07-15 17:35:25.163478] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:14.486 [2024-07-15 17:35:25.163489] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:14.486 [2024-07-15 17:35:25.163502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.163513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:14.486 [2024-07-15 17:35:25.163526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:29:14.486 [2024-07-15 17:35:25.163543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.163640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.486 [2024-07-15 17:35:25.163673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:14.486 [2024-07-15 17:35:25.163685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:14.486 [2024-07-15 17:35:25.163708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.486 [2024-07-15 17:35:25.163813] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:14.486 [2024-07-15 17:35:25.163833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:14.486 [2024-07-15 17:35:25.163846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.486 [2024-07-15 17:35:25.163858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.163881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:14.486 [2024-07-15 17:35:25.163908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.163920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:14.486 [2024-07-15 17:35:25.163934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:14.486 [2024-07-15 17:35:25.163960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:14.486 [2024-07-15 17:35:25.163971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.486 [2024-07-15 17:35:25.163980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:14.486 [2024-07-15 17:35:25.163990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:14.486 [2024-07-15 17:35:25.164000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.486 [2024-07-15 17:35:25.164010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:14.486 [2024-07-15 17:35:25.164019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:14.486 [2024-07-15 17:35:25.164029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:29:14.486 [2024-07-15 17:35:25.164084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:14.486 [2024-07-15 17:35:25.164120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:14.486 [2024-07-15 17:35:25.164153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:14.486 [2024-07-15 17:35:25.164183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:14.486 [2024-07-15 17:35:25.164214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:14.486 [2024-07-15 17:35:25.164246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.486 [2024-07-15 17:35:25.164283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:14.486 [2024-07-15 17:35:25.164294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:14.486 [2024-07-15 17:35:25.164311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.486 [2024-07-15 17:35:25.164323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:14.486 [2024-07-15 17:35:25.164334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:14.486 [2024-07-15 17:35:25.164345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:14.486 [2024-07-15 17:35:25.164367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:14.486 [2024-07-15 17:35:25.164377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164388] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:14.486 [2024-07-15 17:35:25.164400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:14.486 [2024-07-15 17:35:25.164423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.486 [2024-07-15 17:35:25.164435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.486 [2024-07-15 17:35:25.164523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:14.486 [2024-07-15 17:35:25.164536] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:14.486 [2024-07-15 17:35:25.164548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:14.486 [2024-07-15 17:35:25.164559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:14.486 [2024-07-15 17:35:25.164570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:14.486 [2024-07-15 17:35:25.164587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:14.487 [2024-07-15 17:35:25.164601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:14.487 [2024-07-15 17:35:25.164632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:14.487 [2024-07-15 17:35:25.164656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:14.487 [2024-07-15 17:35:25.164668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:14.487 [2024-07-15 17:35:25.164679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:14.487 [2024-07-15 17:35:25.164691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:14.487 [2024-07-15 17:35:25.164702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:14.487 [2024-07-15 17:35:25.164725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:14.487 [2024-07-15 17:35:25.164736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:14.487 [2024-07-15 17:35:25.164747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:14.487 [2024-07-15 17:35:25.164758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:14.487 [2024-07-15 17:35:25.164817] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:14.487 [2024-07-15 17:35:25.164830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.487 [2024-07-15 17:35:25.164854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:14.487 [2024-07-15 17:35:25.164865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:14.487 [2024-07-15 17:35:25.164876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:14.487 [2024-07-15 17:35:25.164888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.164899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:14.487 [2024-07-15 17:35:25.164922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:29:14.487 [2024-07-15 17:35:25.164938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.196487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.196576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:14.487 [2024-07-15 17:35:25.196603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.480 ms 00:29:14.487 [2024-07-15 17:35:25.196622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.196831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.196855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:14.487 [2024-07-15 17:35:25.196893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:14.487 [2024-07-15 17:35:25.196909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.214771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.214841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:14.487 [2024-07-15 17:35:25.214876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.725 ms 00:29:14.487 [2024-07-15 17:35:25.214891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.214977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.214995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:14.487 [2024-07-15 17:35:25.215009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:14.487 [2024-07-15 17:35:25.215029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.215206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.215236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:14.487 [2024-07-15 17:35:25.215252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:14.487 [2024-07-15 17:35:25.215264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.215483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:14.487 [2024-07-15 17:35:25.215512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:14.487 [2024-07-15 17:35:25.215527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:29:14.487 [2024-07-15 17:35:25.215539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.226117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.226167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:14.487 [2024-07-15 17:35:25.226185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.531 ms 00:29:14.487 [2024-07-15 17:35:25.226198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.226466] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:14.487 [2024-07-15 17:35:25.226513] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:14.487 [2024-07-15 17:35:25.226543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.226562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:14.487 [2024-07-15 17:35:25.226576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:29:14.487 [2024-07-15 17:35:25.226587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.240063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.240101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:14.487 [2024-07-15 17:35:25.240118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.450 ms 00:29:14.487 [2024-07-15 17:35:25.240130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.240286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.240306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:14.487 [2024-07-15 17:35:25.240320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:29:14.487 [2024-07-15 17:35:25.240337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.240436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.240457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:14.487 [2024-07-15 17:35:25.240471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:14.487 [2024-07-15 17:35:25.240483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.240875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.240905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:14.487 [2024-07-15 17:35:25.240920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:29:14.487 [2024-07-15 17:35:25.240932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.240978] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:14.487 
[2024-07-15 17:35:25.240997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.241010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:14.487 [2024-07-15 17:35:25.241022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:14.487 [2024-07-15 17:35:25.241045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.251534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:14.487 [2024-07-15 17:35:25.251753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.251780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:14.487 [2024-07-15 17:35:25.251796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.679 ms 00:29:14.487 [2024-07-15 17:35:25.251819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.254538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.254574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:14.487 [2024-07-15 17:35:25.254601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:29:14.487 [2024-07-15 17:35:25.254624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.254753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.254781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:14.487 [2024-07-15 17:35:25.254806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:14.487 [2024-07-15 17:35:25.254818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.254856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.254873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:14.487 [2024-07-15 17:35:25.254905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:14.487 [2024-07-15 17:35:25.254917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.254965] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:14.487 [2024-07-15 17:35:25.254984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.255000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:14.487 [2024-07-15 17:35:25.255018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:14.487 [2024-07-15 17:35:25.255030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.260622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.260677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:14.487 [2024-07-15 17:35:25.260694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.561 ms 00:29:14.487 [2024-07-15 17:35:25.260706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.260795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.487 [2024-07-15 17:35:25.260816] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:14.487 [2024-07-15 17:35:25.260830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:14.487 [2024-07-15 17:35:25.260852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.487 [2024-07-15 17:35:25.262591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.433 ms, result 0 00:29:54.614  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (26 MBps) Copying: 100/1024 [MB] (25 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 151/1024 [MB] (26 MBps) Copying: 178/1024 [MB] (26 MBps) Copying: 205/1024 [MB] (26 MBps) Copying: 230/1024 [MB] (25 MBps) Copying: 257/1024 [MB] (27 MBps) Copying: 285/1024 [MB] (27 MBps) Copying: 313/1024 [MB] (27 MBps) Copying: 339/1024 [MB] (26 MBps) Copying: 365/1024 [MB] (26 MBps) Copying: 389/1024 [MB] (24 MBps) Copying: 416/1024 [MB] (26 MBps) Copying: 442/1024 [MB] (25 MBps) Copying: 466/1024 [MB] (24 MBps) Copying: 492/1024 [MB] (26 MBps) Copying: 517/1024 [MB] (24 MBps) Copying: 544/1024 [MB] (26 MBps) Copying: 569/1024 [MB] (25 MBps) Copying: 594/1024 [MB] (24 MBps) Copying: 619/1024 [MB] (24 MBps) Copying: 645/1024 [MB] (25 MBps) Copying: 672/1024 [MB] (26 MBps) Copying: 698/1024 [MB] (26 MBps) Copying: 724/1024 [MB] (26 MBps) Copying: 751/1024 [MB] (26 MBps) Copying: 777/1024 [MB] (26 MBps) Copying: 803/1024 [MB] (26 MBps) Copying: 830/1024 [MB] (26 MBps) Copying: 854/1024 [MB] (24 MBps) Copying: 878/1024 [MB] (23 MBps) Copying: 903/1024 [MB] (25 MBps) Copying: 928/1024 [MB] (25 MBps) Copying: 954/1024 [MB] (25 MBps) Copying: 979/1024 [MB] (25 MBps) Copying: 1005/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 17:36:05.336895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-07-15 17:36:05.337027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:54.614 [2024-07-15 17:36:05.337068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:54.614 [2024-07-15 17:36:05.337100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-07-15 17:36:05.337146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.614 [2024-07-15 17:36:05.338748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-07-15 17:36:05.338788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:54.614 [2024-07-15 17:36:05.338809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:29:54.614 [2024-07-15 17:36:05.338825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-07-15 17:36:05.339173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-07-15 17:36:05.339206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:54.614 [2024-07-15 17:36:05.339225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:29:54.614 [2024-07-15 17:36:05.339250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-07-15 17:36:05.339302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-07-15 17:36:05.339323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:29:54.614 [2024-07-15 17:36:05.339340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:54.614 [2024-07-15 17:36:05.339356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-07-15 17:36:05.339485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-07-15 17:36:05.339522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:54.614 [2024-07-15 17:36:05.339540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:54.614 [2024-07-15 17:36:05.339572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-07-15 17:36:05.339621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:54.614 [2024-07-15 17:36:05.339648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339968] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.339985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 
[2024-07-15 17:36:05.340426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:54.614 [2024-07-15 17:36:05.340677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:29:54.615 [2024-07-15 17:36:05.340852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.340984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:54.615 [2024-07-15 17:36:05.341838] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:54.615 [2024-07-15 17:36:05.341874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1ed47d-2b99-4656-ab93-ac889817c969 00:29:54.615 [2024-07-15 17:36:05.341892] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:54.615 [2024-07-15 17:36:05.341925] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:54.615 [2024-07-15 17:36:05.341941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:54.615 [2024-07-15 17:36:05.341965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:54.615 [2024-07-15 17:36:05.341980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:54.615 [2024-07-15 17:36:05.341996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:54.615 [2024-07-15 17:36:05.342012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:54.615 [2024-07-15 17:36:05.342026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:54.615 [2024-07-15 17:36:05.342041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:54.615 [2024-07-15 17:36:05.342056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-07-15 17:36:05.342091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:54.615 [2024-07-15 17:36:05.342108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:29:54.615 [2024-07-15 17:36:05.342125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.345332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-07-15 17:36:05.345396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:54.615 [2024-07-15 17:36:05.345439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.172 ms 00:29:54.615 [2024-07-15 17:36:05.345455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.345651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-07-15 17:36:05.345691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:54.615 [2024-07-15 17:36:05.345728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:29:54.615 [2024-07-15 17:36:05.345744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.355293] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.355354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.615 [2024-07-15 17:36:05.355381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.355394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.355474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.355505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.615 [2024-07-15 17:36:05.355518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.355529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.355617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.355637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.615 [2024-07-15 17:36:05.355649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.355660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.355683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.355721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.615 [2024-07-15 17:36:05.355734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.355745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.375273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.375398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.615 [2024-07-15 17:36:05.375425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.375438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.390187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.390265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.615 [2024-07-15 17:36:05.390284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.390315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.390431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.615 [2024-07-15 17:36:05.390450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.615 [2024-07-15 17:36:05.390477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.615 [2024-07-15 17:36:05.390489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-07-15 17:36:05.390540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.616 [2024-07-15 17:36:05.390556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.616 [2024-07-15 17:36:05.390578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.616 [2024-07-15 17:36:05.390589] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:54.616 [2024-07-15 17:36:05.390677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.616 [2024-07-15 17:36:05.390697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.616 [2024-07-15 17:36:05.390709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.616 [2024-07-15 17:36:05.390720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.616 [2024-07-15 17:36:05.390761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.616 [2024-07-15 17:36:05.390788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:54.616 [2024-07-15 17:36:05.390800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.616 [2024-07-15 17:36:05.390818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.616 [2024-07-15 17:36:05.390875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.616 [2024-07-15 17:36:05.390891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.616 [2024-07-15 17:36:05.390903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.616 [2024-07-15 17:36:05.390915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.616 [2024-07-15 17:36:05.390975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.616 [2024-07-15 17:36:05.390992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.616 [2024-07-15 17:36:05.391011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.616 [2024-07-15 17:36:05.391021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.616 [2024-07-15 17:36:05.391198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.271 ms, result 0 00:29:55.181 00:29:55.181 00:29:55.181 17:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:57.712 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:57.712 17:36:07 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:57.712 [2024-07-15 17:36:08.035099] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:29:57.712 [2024-07-15 17:36:08.035281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99777 ] 00:29:57.712 [2024-07-15 17:36:08.179135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
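
The ftl_restore_fast step above re-runs the data path: it first verifies the scratch file against its stored checksum with md5sum -c, then writes the file back through the ftl0 bdev with spdk_dd at an output offset of 131072 blocks. A minimal sketch of that verify-then-write sequence, assuming the same paths and the ftl.json config produced earlier in this run, and using only the flags already visible in this log (--if, --ob, --json, --seek):

# Sketch only: reproduce the verify-then-write flow seen in ftl/restore.sh (paths assumed from this log).
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile
FTL_JSON=$SPDK/test/ftl/config/ftl.json

# 1) Check the test file against the checksum recorded before the fast shutdown.
md5sum -c "$TESTFILE.md5"

# 2) Write the same data back through the FTL bdev, skipping the first 131072 output blocks.
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON" --seek=131072
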
00:29:57.712 [2024-07-15 17:36:08.203300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.712 [2024-07-15 17:36:08.338216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.712 [2024-07-15 17:36:08.500210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.712 [2024-07-15 17:36:08.500312] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.971 [2024-07-15 17:36:08.665759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.665852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:57.971 [2024-07-15 17:36:08.665876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:57.971 [2024-07-15 17:36:08.665890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.665984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.666007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.971 [2024-07-15 17:36:08.666029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:57.971 [2024-07-15 17:36:08.666041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.666087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:57.971 [2024-07-15 17:36:08.666422] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:57.971 [2024-07-15 17:36:08.666457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.666477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.971 [2024-07-15 17:36:08.666496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:29:57.971 [2024-07-15 17:36:08.666511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.667068] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:57.971 [2024-07-15 17:36:08.667105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.667129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:57.971 [2024-07-15 17:36:08.667155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:57.971 [2024-07-15 17:36:08.667167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.667243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.667261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:57.971 [2024-07-15 17:36:08.667275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:57.971 [2024-07-15 17:36:08.667287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.667784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.667820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.971 [2024-07-15 17:36:08.667837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:29:57.971 [2024-07-15 17:36:08.667850] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.667949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.667970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.971 [2024-07-15 17:36:08.667984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:57.971 [2024-07-15 17:36:08.668012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.668054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.668072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:57.971 [2024-07-15 17:36:08.668109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:57.971 [2024-07-15 17:36:08.668122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.668155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:57.971 [2024-07-15 17:36:08.671011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.671043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.971 [2024-07-15 17:36:08.671065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:29:57.971 [2024-07-15 17:36:08.671090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.671141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.671165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:57.971 [2024-07-15 17:36:08.671190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:57.971 [2024-07-15 17:36:08.671203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.671263] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:57.971 [2024-07-15 17:36:08.671302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:57.971 [2024-07-15 17:36:08.671347] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:57.971 [2024-07-15 17:36:08.671403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:57.971 [2024-07-15 17:36:08.671542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:57.971 [2024-07-15 17:36:08.671562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:57.971 [2024-07-15 17:36:08.671577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:57.971 [2024-07-15 17:36:08.671594] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:57.971 [2024-07-15 17:36:08.671617] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:57.971 [2024-07-15 17:36:08.671644] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:57.971 [2024-07-15 17:36:08.671661] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:29:57.971 [2024-07-15 17:36:08.671674] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:57.971 [2024-07-15 17:36:08.671685] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:57.971 [2024-07-15 17:36:08.671698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.671721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:57.971 [2024-07-15 17:36:08.671750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:29:57.971 [2024-07-15 17:36:08.671762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.671860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.971 [2024-07-15 17:36:08.671900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:57.971 [2024-07-15 17:36:08.671915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:57.971 [2024-07-15 17:36:08.671927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.971 [2024-07-15 17:36:08.672048] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:57.971 [2024-07-15 17:36:08.672069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:57.971 [2024-07-15 17:36:08.672097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.971 [2024-07-15 17:36:08.672116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:57.971 [2024-07-15 17:36:08.672150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:57.971 [2024-07-15 17:36:08.672174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:57.971 [2024-07-15 17:36:08.672186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.971 [2024-07-15 17:36:08.672208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:57.971 [2024-07-15 17:36:08.672219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:57.971 [2024-07-15 17:36:08.672230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.971 [2024-07-15 17:36:08.672241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:57.971 [2024-07-15 17:36:08.672252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:57.971 [2024-07-15 17:36:08.672263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:57.971 [2024-07-15 17:36:08.672299] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:57.971 [2024-07-15 17:36:08.672310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:57.971 [2024-07-15 17:36:08.672338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672350] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.971 [2024-07-15 17:36:08.672379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:57.971 [2024-07-15 17:36:08.672393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.971 [2024-07-15 17:36:08.672415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:57.971 [2024-07-15 17:36:08.672426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:57.971 [2024-07-15 17:36:08.672437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.972 [2024-07-15 17:36:08.672448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:57.972 [2024-07-15 17:36:08.672460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:57.972 [2024-07-15 17:36:08.672471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.972 [2024-07-15 17:36:08.672482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:57.972 [2024-07-15 17:36:08.672492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:57.972 [2024-07-15 17:36:08.672504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.972 [2024-07-15 17:36:08.672515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:57.972 [2024-07-15 17:36:08.672526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:57.972 [2024-07-15 17:36:08.672545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.972 [2024-07-15 17:36:08.672557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:57.972 [2024-07-15 17:36:08.672569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:57.972 [2024-07-15 17:36:08.672579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.972 [2024-07-15 17:36:08.672591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:57.972 [2024-07-15 17:36:08.672602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:57.972 [2024-07-15 17:36:08.672613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.972 [2024-07-15 17:36:08.672623] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:57.972 [2024-07-15 17:36:08.672634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:57.972 [2024-07-15 17:36:08.672646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.972 [2024-07-15 17:36:08.672658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.972 [2024-07-15 17:36:08.672670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:57.972 [2024-07-15 17:36:08.672681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:57.972 [2024-07-15 17:36:08.672692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:57.972 [2024-07-15 17:36:08.672704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:57.972 [2024-07-15 17:36:08.672715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:57.972 [2024-07-15 17:36:08.672730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:29:57.972 [2024-07-15 17:36:08.672744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:57.972 [2024-07-15 17:36:08.672759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:57.972 [2024-07-15 17:36:08.672785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:57.972 [2024-07-15 17:36:08.672797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:57.972 [2024-07-15 17:36:08.672808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:57.972 [2024-07-15 17:36:08.672821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:57.972 [2024-07-15 17:36:08.672832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:57.972 [2024-07-15 17:36:08.672844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:57.972 [2024-07-15 17:36:08.672856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:57.972 [2024-07-15 17:36:08.672868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:57.972 [2024-07-15 17:36:08.672880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:57.972 [2024-07-15 17:36:08.672944] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:57.972 [2024-07-15 17:36:08.672958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.972 [2024-07-15 17:36:08.672984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:57.972 [2024-07-15 17:36:08.672997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:57.972 [2024-07-15 17:36:08.673009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:57.972 [2024-07-15 17:36:08.673022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.673035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:57.972 [2024-07-15 17:36:08.673053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:29:57.972 [2024-07-15 17:36:08.673065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.698248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.698329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.972 [2024-07-15 17:36:08.698370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.113 ms 00:29:57.972 [2024-07-15 17:36:08.698387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.698550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.698569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:57.972 [2024-07-15 17:36:08.698584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:29:57.972 [2024-07-15 17:36:08.698605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.715628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.715703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.972 [2024-07-15 17:36:08.715737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.903 ms 00:29:57.972 [2024-07-15 17:36:08.715751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.715839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.715857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.972 [2024-07-15 17:36:08.715880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:57.972 [2024-07-15 17:36:08.715892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.716089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.716127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.972 [2024-07-15 17:36:08.716152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:57.972 [2024-07-15 17:36:08.716165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.716346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.716393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.972 [2024-07-15 17:36:08.716409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:29:57.972 [2024-07-15 17:36:08.716429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.726816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 
17:36:08.726866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.972 [2024-07-15 17:36:08.726889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.346 ms 00:29:57.972 [2024-07-15 17:36:08.726903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.727131] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:57.972 [2024-07-15 17:36:08.727164] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:57.972 [2024-07-15 17:36:08.727187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.727211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:57.972 [2024-07-15 17:36:08.727225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:57.972 [2024-07-15 17:36:08.727238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.740723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.740857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:57.972 [2024-07-15 17:36:08.740886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.453 ms 00:29:57.972 [2024-07-15 17:36:08.740899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.741108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.741134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:57.972 [2024-07-15 17:36:08.741156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:29:57.972 [2024-07-15 17:36:08.741169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.741294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.741320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:57.972 [2024-07-15 17:36:08.741335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:57.972 [2024-07-15 17:36:08.741348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.741928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.741955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:57.972 [2024-07-15 17:36:08.741972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:29:57.972 [2024-07-15 17:36:08.741991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.742026] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:57.972 [2024-07-15 17:36:08.742051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.742075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:57.972 [2024-07-15 17:36:08.742088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:57.972 [2024-07-15 17:36:08.742099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.752616] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:57.972 [2024-07-15 17:36:08.752905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.752935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:57.972 [2024-07-15 17:36:08.752970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.761 ms 00:29:57.972 [2024-07-15 17:36:08.752983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.755682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.755718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:57.972 [2024-07-15 17:36:08.755734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:29:57.972 [2024-07-15 17:36:08.755746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.972 [2024-07-15 17:36:08.755918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.972 [2024-07-15 17:36:08.755946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:57.972 [2024-07-15 17:36:08.755961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:57.972 [2024-07-15 17:36:08.755974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.973 [2024-07-15 17:36:08.756012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.973 [2024-07-15 17:36:08.756028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:57.973 [2024-07-15 17:36:08.756042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:57.973 [2024-07-15 17:36:08.756054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.973 [2024-07-15 17:36:08.756105] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:57.973 [2024-07-15 17:36:08.756124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.973 [2024-07-15 17:36:08.756161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:57.973 [2024-07-15 17:36:08.756176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:57.973 [2024-07-15 17:36:08.756188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.973 [2024-07-15 17:36:08.761816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.973 [2024-07-15 17:36:08.761870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:57.973 [2024-07-15 17:36:08.761889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms 00:29:57.973 [2024-07-15 17:36:08.761903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.973 [2024-07-15 17:36:08.762010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.973 [2024-07-15 17:36:08.762032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:57.973 [2024-07-15 17:36:08.762054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:57.973 [2024-07-15 17:36:08.762067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.973 [2024-07-15 17:36:08.763833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.434 ms, result 0 
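
The 'FTL startup' process above is a chain of trace_step notices, each reporting an Action (or Rollback), a name, a duration and a status, followed by a finish_msg with the total (97.434 ms here, 105.433 ms for the earlier instance). A rough way to tabulate those per-step timings from a saved copy of this output is sketched below; build.log is an assumed filename, and the pattern simply pairs each 428:trace_step name with the 430:trace_step duration that follows it, assuming one notice per line as the driver prints them.

# Sketch only: list per-step FTL startup timings from a captured log (build.log is assumed).
awk -F'name: |duration: ' '
  /428:trace_step/ { step = $2 }                        # remember the step name
  /430:trace_step/ { printf "%-40s %s\n", step, $2 }    # print it with the duration that follows
' build.log
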
00:30:41.491  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (25 MBps) Copying: 99/1024 [MB] (26 MBps) Copying: 124/1024 [MB] (25 MBps) Copying: 149/1024 [MB] (24 MBps) Copying: 173/1024 [MB] (23 MBps) Copying: 196/1024 [MB] (23 MBps) Copying: 220/1024 [MB] (23 MBps) Copying: 243/1024 [MB] (23 MBps) Copying: 268/1024 [MB] (24 MBps) Copying: 291/1024 [MB] (23 MBps) Copying: 317/1024 [MB] (25 MBps) Copying: 341/1024 [MB] (24 MBps) Copying: 366/1024 [MB] (24 MBps) Copying: 389/1024 [MB] (22 MBps) Copying: 411/1024 [MB] (22 MBps) Copying: 434/1024 [MB] (23 MBps) Copying: 458/1024 [MB] (23 MBps) Copying: 482/1024 [MB] (23 MBps) Copying: 506/1024 [MB] (24 MBps) Copying: 529/1024 [MB] (22 MBps) Copying: 553/1024 [MB] (23 MBps) Copying: 576/1024 [MB] (23 MBps) Copying: 600/1024 [MB] (23 MBps) Copying: 625/1024 [MB] (24 MBps) Copying: 650/1024 [MB] (24 MBps) Copying: 675/1024 [MB] (24 MBps) Copying: 699/1024 [MB] (24 MBps) Copying: 725/1024 [MB] (25 MBps) Copying: 748/1024 [MB] (23 MBps) Copying: 771/1024 [MB] (23 MBps) Copying: 796/1024 [MB] (24 MBps) Copying: 821/1024 [MB] (24 MBps) Copying: 843/1024 [MB] (22 MBps) Copying: 867/1024 [MB] (24 MBps) Copying: 892/1024 [MB] (25 MBps) Copying: 915/1024 [MB] (23 MBps) Copying: 940/1024 [MB] (25 MBps) Copying: 965/1024 [MB] (24 MBps) Copying: 991/1024 [MB] (25 MBps) Copying: 1016/1024 [MB] (25 MBps) Copying: 1048196/1048576 [kB] (7336 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-15 17:36:52.253041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.491 [2024-07-15 17:36:52.253143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:41.491 [2024-07-15 17:36:52.253169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:41.491 [2024-07-15 17:36:52.253182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.491 [2024-07-15 17:36:52.255723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:41.491 [2024-07-15 17:36:52.260114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.491 [2024-07-15 17:36:52.260166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:41.491 [2024-07-15 17:36:52.260197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:30:41.491 [2024-07-15 17:36:52.260210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.491 [2024-07-15 17:36:52.270711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.491 [2024-07-15 17:36:52.270765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:41.491 [2024-07-15 17:36:52.270786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.033 ms 00:30:41.491 [2024-07-15 17:36:52.270800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.491 [2024-07-15 17:36:52.270845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.491 [2024-07-15 17:36:52.270862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:41.491 [2024-07-15 17:36:52.270886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:41.491 [2024-07-15 17:36:52.270900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.491 [2024-07-15 17:36:52.270973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.491 
[2024-07-15 17:36:52.270990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:41.491 [2024-07-15 17:36:52.271004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:41.491 [2024-07-15 17:36:52.271016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.491 [2024-07-15 17:36:52.271038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:41.491 [2024-07-15 17:36:52.271057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:30:41.491 [2024-07-15 17:36:52.271073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:41.491 [2024-07-15 17:36:52.271400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271700] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.271988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272026] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 
17:36:52.272352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:41.492 [2024-07-15 17:36:52.272430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:41.492 [2024-07-15 17:36:52.272443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1ed47d-2b99-4656-ab93-ac889817c969 00:30:41.492 [2024-07-15 17:36:52.272456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:30:41.492 [2024-07-15 17:36:52.272473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130336 00:30:41.492 [2024-07-15 17:36:52.272485] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:30:41.492 [2024-07-15 17:36:52.272498] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:41.492 [2024-07-15 17:36:52.272521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:41.492 [2024-07-15 17:36:52.272534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:41.492 [2024-07-15 17:36:52.272545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:41.492 [2024-07-15 17:36:52.272556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:41.492 [2024-07-15 17:36:52.272567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:41.492 [2024-07-15 17:36:52.272579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.492 [2024-07-15 17:36:52.272593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:41.492 [2024-07-15 17:36:52.272605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:30:41.492 [2024-07-15 17:36:52.272618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.492 [2024-07-15 17:36:52.275660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.492 [2024-07-15 17:36:52.275694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:41.492 [2024-07-15 17:36:52.275719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:30:41.492 [2024-07-15 17:36:52.275743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.492 [2024-07-15 17:36:52.275932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.492 [2024-07-15 17:36:52.275957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:41.492 [2024-07-15 17:36:52.275973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:30:41.492 [2024-07-15 17:36:52.275993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.492 [2024-07-15 17:36:52.286016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.492 [2024-07-15 17:36:52.286083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:41.493 [2024-07-15 17:36:52.286102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 
17:36:52.286124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.286239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.286261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:41.493 [2024-07-15 17:36:52.286275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.286303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.286446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.286497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:41.493 [2024-07-15 17:36:52.286513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.286526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.286553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.286574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:41.493 [2024-07-15 17:36:52.286587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.286599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.309672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.309779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:41.493 [2024-07-15 17:36:52.309802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.309816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.325654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.325755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:41.493 [2024-07-15 17:36:52.325802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.325815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.325933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.325964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:41.493 [2024-07-15 17:36:52.325978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.325991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.326082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:41.493 [2024-07-15 17:36:52.326098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.326110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.326240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:41.493 [2024-07-15 17:36:52.326255] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.326268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.326339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:41.493 [2024-07-15 17:36:52.326353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.326416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.326510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:41.493 [2024-07-15 17:36:52.326524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.326536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.493 [2024-07-15 17:36:52.326620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:41.493 [2024-07-15 17:36:52.326634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.493 [2024-07-15 17:36:52.326645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.493 [2024-07-15 17:36:52.326847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 76.011 ms, result 0 00:30:42.427 00:30:42.427 00:30:42.427 17:36:53 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:42.427 [2024-07-15 17:36:53.259842] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:30:42.427 [2024-07-15 17:36:53.260039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100215 ] 00:30:42.684 [2024-07-15 17:36:53.416468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:42.684 [2024-07-15 17:36:53.438974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.940 [2024-07-15 17:36:53.579900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.941 [2024-07-15 17:36:53.744949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:42.941 [2024-07-15 17:36:53.745060] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:43.200 [2024-07-15 17:36:53.911526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.911607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:43.200 [2024-07-15 17:36:53.911627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:43.200 [2024-07-15 17:36:53.911638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.911731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.911753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:43.200 [2024-07-15 17:36:53.911770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:43.200 [2024-07-15 17:36:53.911791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.911823] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:43.200 [2024-07-15 17:36:53.912117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:43.200 [2024-07-15 17:36:53.912165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.912178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:43.200 [2024-07-15 17:36:53.912189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:30:43.200 [2024-07-15 17:36:53.912200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.912820] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:43.200 [2024-07-15 17:36:53.912865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.912886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:43.200 [2024-07-15 17:36:53.912911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:43.200 [2024-07-15 17:36:53.912923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.912994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.913019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:43.200 [2024-07-15 17:36:53.913032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:43.200 [2024-07-15 17:36:53.913059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.913528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.913563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:43.200 [2024-07-15 17:36:53.913582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:30:43.200 [2024-07-15 17:36:53.913604] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.913703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.913723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:43.200 [2024-07-15 17:36:53.913751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:30:43.200 [2024-07-15 17:36:53.913762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.913833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.913848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:43.200 [2024-07-15 17:36:53.913871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:43.200 [2024-07-15 17:36:53.913887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.913923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:43.200 [2024-07-15 17:36:53.916991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.917055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:43.200 [2024-07-15 17:36:53.917070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:30:43.200 [2024-07-15 17:36:53.917081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.917141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.917157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:43.200 [2024-07-15 17:36:53.917183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:43.200 [2024-07-15 17:36:53.917194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.917264] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:43.200 [2024-07-15 17:36:53.917432] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:43.200 [2024-07-15 17:36:53.917485] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:43.200 [2024-07-15 17:36:53.917507] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:43.200 [2024-07-15 17:36:53.917606] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:43.200 [2024-07-15 17:36:53.917634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:43.200 [2024-07-15 17:36:53.917650] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:43.200 [2024-07-15 17:36:53.917665] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:43.200 [2024-07-15 17:36:53.917678] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:43.200 [2024-07-15 17:36:53.917702] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:43.200 [2024-07-15 17:36:53.917767] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:30:43.200 [2024-07-15 17:36:53.917784] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:43.200 [2024-07-15 17:36:53.917795] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:43.200 [2024-07-15 17:36:53.917806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.917817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:43.200 [2024-07-15 17:36:53.917829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:30:43.200 [2024-07-15 17:36:53.917844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.917930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.200 [2024-07-15 17:36:53.917947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:43.200 [2024-07-15 17:36:53.917958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:43.200 [2024-07-15 17:36:53.917981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.200 [2024-07-15 17:36:53.918101] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:43.200 [2024-07-15 17:36:53.918125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:43.200 [2024-07-15 17:36:53.918137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:43.200 [2024-07-15 17:36:53.918149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.200 [2024-07-15 17:36:53.918167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:43.200 [2024-07-15 17:36:53.918182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:43.200 [2024-07-15 17:36:53.918193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:43.200 [2024-07-15 17:36:53.918202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:43.200 [2024-07-15 17:36:53.918212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:43.201 [2024-07-15 17:36:53.918231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:43.201 [2024-07-15 17:36:53.918240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:43.201 [2024-07-15 17:36:53.918250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:43.201 [2024-07-15 17:36:53.918259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:43.201 [2024-07-15 17:36:53.918270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:43.201 [2024-07-15 17:36:53.918279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:43.201 [2024-07-15 17:36:53.918312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:43.201 [2024-07-15 17:36:53.918342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918369] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:43.201 [2024-07-15 17:36:53.918392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:43.201 [2024-07-15 17:36:53.918421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:43.201 [2024-07-15 17:36:53.918450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:43.201 [2024-07-15 17:36:53.918478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:43.201 [2024-07-15 17:36:53.918497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:43.201 [2024-07-15 17:36:53.918506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:43.201 [2024-07-15 17:36:53.918532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:43.201 [2024-07-15 17:36:53.918545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:43.201 [2024-07-15 17:36:53.918556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:43.201 [2024-07-15 17:36:53.918566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:43.201 [2024-07-15 17:36:53.918586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:43.201 [2024-07-15 17:36:53.918597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918606] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:43.201 [2024-07-15 17:36:53.918617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:43.201 [2024-07-15 17:36:53.918627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:43.201 [2024-07-15 17:36:53.918638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:43.201 [2024-07-15 17:36:53.918648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:43.201 [2024-07-15 17:36:53.918658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:43.201 [2024-07-15 17:36:53.918668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:43.201 [2024-07-15 17:36:53.918678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:43.201 [2024-07-15 17:36:53.918687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:43.201 [2024-07-15 17:36:53.918696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:30:43.201 [2024-07-15 17:36:53.918714] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:43.201 [2024-07-15 17:36:53.918738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:43.201 [2024-07-15 17:36:53.918763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:43.201 [2024-07-15 17:36:53.918773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:43.201 [2024-07-15 17:36:53.918785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:43.201 [2024-07-15 17:36:53.918795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:43.201 [2024-07-15 17:36:53.918806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:43.201 [2024-07-15 17:36:53.918816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:43.201 [2024-07-15 17:36:53.918827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:43.201 [2024-07-15 17:36:53.918837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:43.201 [2024-07-15 17:36:53.918848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:43.201 [2024-07-15 17:36:53.918918] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:43.201 [2024-07-15 17:36:53.918930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:43.201 [2024-07-15 17:36:53.918953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:43.201 [2024-07-15 17:36:53.918964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:43.201 [2024-07-15 17:36:53.918974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:43.201 [2024-07-15 17:36:53.918985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.919022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:43.201 [2024-07-15 17:36:53.919034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:30:43.201 [2024-07-15 17:36:53.919049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.944489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.944570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:43.201 [2024-07-15 17:36:53.944614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.381 ms 00:30:43.201 [2024-07-15 17:36:53.944630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.944821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.944864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:43.201 [2024-07-15 17:36:53.944884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:43.201 [2024-07-15 17:36:53.944899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.963900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.963979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:43.201 [2024-07-15 17:36:53.963999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:30:43.201 [2024-07-15 17:36:53.964039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.964146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.964163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:43.201 [2024-07-15 17:36:53.964207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:43.201 [2024-07-15 17:36:53.964226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.964395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.964431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:43.201 [2024-07-15 17:36:53.964453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:43.201 [2024-07-15 17:36:53.964480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.964682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.964726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:43.201 [2024-07-15 17:36:53.964739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:43.201 [2024-07-15 17:36:53.964750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.976587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 
17:36:53.976635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:43.201 [2024-07-15 17:36:53.976658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.802 ms 00:30:43.201 [2024-07-15 17:36:53.976672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.976923] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:43.201 [2024-07-15 17:36:53.976971] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:43.201 [2024-07-15 17:36:53.976988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.201 [2024-07-15 17:36:53.977021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:43.201 [2024-07-15 17:36:53.977049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:30:43.201 [2024-07-15 17:36:53.977062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.201 [2024-07-15 17:36:53.991776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:53.991854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:43.202 [2024-07-15 17:36:53.991871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.666 ms 00:30:43.202 [2024-07-15 17:36:53.991884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:53.992052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:53.992235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:43.202 [2024-07-15 17:36:53.992293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:30:43.202 [2024-07-15 17:36:53.992309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:53.992383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:53.992414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:43.202 [2024-07-15 17:36:53.992426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:43.202 [2024-07-15 17:36:53.992437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:53.992823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:53.992864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:43.202 [2024-07-15 17:36:53.992878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:30:43.202 [2024-07-15 17:36:53.992906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:53.992935] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:43.202 [2024-07-15 17:36:53.992951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:53.992967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:43.202 [2024-07-15 17:36:53.992980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:43.202 [2024-07-15 17:36:53.993007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.005600] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:43.202 [2024-07-15 17:36:54.005970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.006025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:43.202 [2024-07-15 17:36:54.006044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.904 ms 00:30:43.202 [2024-07-15 17:36:54.006056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.009261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.009292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:43.202 [2024-07-15 17:36:54.009307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:30:43.202 [2024-07-15 17:36:54.009347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.009522] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:43.202 [2024-07-15 17:36:54.010319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.010350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:43.202 [2024-07-15 17:36:54.010380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:30:43.202 [2024-07-15 17:36:54.010409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.010453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.010469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:43.202 [2024-07-15 17:36:54.010482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:43.202 [2024-07-15 17:36:54.010509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.010573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:43.202 [2024-07-15 17:36:54.010614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.010660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:43.202 [2024-07-15 17:36:54.010673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:43.202 [2024-07-15 17:36:54.010702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.016495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.016537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:43.202 [2024-07-15 17:36:54.016554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.758 ms 00:30:43.202 [2024-07-15 17:36:54.016579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 [2024-07-15 17:36:54.016779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.202 [2024-07-15 17:36:54.016833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:43.202 [2024-07-15 17:36:54.016853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:43.202 [2024-07-15 17:36:54.016885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.202 
[2024-07-15 17:36:54.028468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.405 ms, result 0 00:31:24.288  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (23 MBps) Copying: 74/1024 [MB] (25 MBps) Copying: 98/1024 [MB] (24 MBps) Copying: 123/1024 [MB] (24 MBps) Copying: 148/1024 [MB] (25 MBps) Copying: 174/1024 [MB] (26 MBps) Copying: 201/1024 [MB] (26 MBps) Copying: 226/1024 [MB] (25 MBps) Copying: 253/1024 [MB] (26 MBps) Copying: 278/1024 [MB] (25 MBps) Copying: 305/1024 [MB] (26 MBps) Copying: 331/1024 [MB] (26 MBps) Copying: 357/1024 [MB] (26 MBps) Copying: 383/1024 [MB] (25 MBps) Copying: 409/1024 [MB] (26 MBps) Copying: 433/1024 [MB] (23 MBps) Copying: 457/1024 [MB] (23 MBps) Copying: 482/1024 [MB] (25 MBps) Copying: 509/1024 [MB] (27 MBps) Copying: 534/1024 [MB] (24 MBps) Copying: 560/1024 [MB] (25 MBps) Copying: 585/1024 [MB] (24 MBps) Copying: 610/1024 [MB] (25 MBps) Copying: 634/1024 [MB] (24 MBps) Copying: 659/1024 [MB] (24 MBps) Copying: 684/1024 [MB] (25 MBps) Copying: 708/1024 [MB] (23 MBps) Copying: 734/1024 [MB] (26 MBps) Copying: 760/1024 [MB] (25 MBps) Copying: 786/1024 [MB] (26 MBps) Copying: 811/1024 [MB] (24 MBps) Copying: 835/1024 [MB] (24 MBps) Copying: 859/1024 [MB] (24 MBps) Copying: 883/1024 [MB] (23 MBps) Copying: 908/1024 [MB] (24 MBps) Copying: 935/1024 [MB] (26 MBps) Copying: 961/1024 [MB] (26 MBps) Copying: 987/1024 [MB] (26 MBps) Copying: 1013/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 17:37:35.110429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.288 [2024-07-15 17:37:35.110565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:24.288 [2024-07-15 17:37:35.110600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:24.288 [2024-07-15 17:37:35.110640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.288 [2024-07-15 17:37:35.110690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:24.288 [2024-07-15 17:37:35.112181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.288 [2024-07-15 17:37:35.112228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:24.288 [2024-07-15 17:37:35.112253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:31:24.288 [2024-07-15 17:37:35.112274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.288 [2024-07-15 17:37:35.112670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.288 [2024-07-15 17:37:35.112715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:24.288 [2024-07-15 17:37:35.112950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:31:24.288 [2024-07-15 17:37:35.112987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.288 [2024-07-15 17:37:35.113060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.288 [2024-07-15 17:37:35.113084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:24.288 [2024-07-15 17:37:35.113105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:24.288 [2024-07-15 17:37:35.113132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.288 [2024-07-15 17:37:35.113228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.288 [2024-07-15 17:37:35.113264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:24.288 [2024-07-15 17:37:35.113285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:24.288 [2024-07-15 17:37:35.113304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.288 [2024-07-15 17:37:35.113336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:24.288 [2024-07-15 17:37:35.113413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:31:24.288 [2024-07-15 17:37:35.113440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113862] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.113988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:24.288 [2024-07-15 17:37:35.114174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 
[2024-07-15 17:37:35.114432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:31:24.289 [2024-07-15 17:37:35.114961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.114982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:24.289 [2024-07-15 17:37:35.115604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:24.289 [2024-07-15 17:37:35.115624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a1ed47d-2b99-4656-ab93-ac889817c969 00:31:24.289 [2024-07-15 17:37:35.115645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:31:24.289 [2024-07-15 17:37:35.115674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:31:24.289 [2024-07-15 17:37:35.115708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:31:24.289 [2024-07-15 17:37:35.115741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:31:24.289 [2024-07-15 17:37:35.115760] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:24.289 [2024-07-15 17:37:35.115781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:24.289 [2024-07-15 17:37:35.115800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:24.289 [2024-07-15 17:37:35.115818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:24.289 [2024-07-15 17:37:35.115838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:24.289 [2024-07-15 17:37:35.115857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.289 [2024-07-15 17:37:35.115877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:24.289 [2024-07-15 17:37:35.115898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:31:24.289 [2024-07-15 17:37:35.115917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.120618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.290 [2024-07-15 17:37:35.120666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:24.290 [2024-07-15 17:37:35.120692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:31:24.290 [2024-07-15 17:37:35.120707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.120912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.290 [2024-07-15 17:37:35.120958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:24.290 [2024-07-15 17:37:35.120974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:31:24.290 [2024-07-15 17:37:35.120989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.133077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.290 [2024-07-15 17:37:35.133159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:24.290 [2024-07-15 17:37:35.133180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:31:24.290 [2024-07-15 17:37:35.133200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.133309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.290 [2024-07-15 17:37:35.133330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:24.290 [2024-07-15 17:37:35.133344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.290 [2024-07-15 17:37:35.133368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.133500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.290 [2024-07-15 17:37:35.133527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:24.290 [2024-07-15 17:37:35.133541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.290 [2024-07-15 17:37:35.133554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.290 [2024-07-15 17:37:35.133577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.290 [2024-07-15 17:37:35.133591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:24.290 [2024-07-15 17:37:35.133604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.290 [2024-07-15 17:37:35.133615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.160186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.160274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:24.548 [2024-07-15 17:37:35.160295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.160330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.175667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.175750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:24.548 [2024-07-15 17:37:35.175771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.175783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.175900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.175917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:24.548 [2024-07-15 17:37:35.175938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.175949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.176025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:24.548 [2024-07-15 17:37:35.176037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.176049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.176177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
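The WAF value reported in the statistics dump above is simply the ratio of total media writes to user writes for this run. Recomputing it from the figures the log prints (the awk one-liner is only an illustration, not part of the test scripts):

    awk 'BEGIN { printf "WAF = %.4f\n", 3616 / 3584 }'
    # prints: WAF = 1.0089, matching the [FTL][ftl0] WAF line above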
00:31:24.548 [2024-07-15 17:37:35.176192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.176204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.176262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:24.548 [2024-07-15 17:37:35.176275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.176297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.176391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:24.548 [2024-07-15 17:37:35.176412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.176424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.548 [2024-07-15 17:37:35.176516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:24.548 [2024-07-15 17:37:35.176529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.548 [2024-07-15 17:37:35.176541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.548 [2024-07-15 17:37:35.176725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 66.296 ms, result 0 00:31:24.807 00:31:24.807 00:31:24.807 17:37:35 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:27.335 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:27.335 Process with pid 98761 is not found 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 98761 00:31:27.335 17:37:37 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 98761 ']' 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 98761 00:31:27.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98761) - No such process 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 98761 is not found' 00:31:27.336 Remove shared memory files 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_band_md 
/dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_l2p_l1 /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_l2p_l2 /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_l2p_l2_ctx /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_nvc_md /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_p2l_pool /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_sb /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_sb_shm /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_trim_bitmap /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_trim_log /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_trim_md /dev/hugepages/ftl_4a1ed47d-2b99-4656-ab93-ac889817c969_vmap 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:27.336 00:31:27.336 real 3m12.440s 00:31:27.336 user 2m57.225s 00:31:27.336 sys 0m17.585s 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:27.336 17:37:37 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:27.336 ************************************ 00:31:27.336 END TEST ftl_restore_fast 00:31:27.336 ************************************ 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@1142 -- # return 0 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@14 -- # killprocess 91283 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@948 -- # '[' -z 91283 ']' 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@952 -- # kill -0 91283 00:31:27.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (91283) - No such process 00:31:27.336 Process with pid 91283 is not found 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 91283 is not found' 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=100681 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@20 -- # waitforlisten 100681 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@829 -- # '[' -z 100681 ']' 00:31:27.336 17:37:37 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.336 17:37:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:27.336 [2024-07-15 17:37:38.042283] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:27.336 [2024-07-15 17:37:38.042542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100681 ] 00:31:27.593 [2024-07-15 17:37:38.195658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
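The killprocess calls traced above probe the target pid with kill -0 before signalling it, which is why a pid that has already exited only produces the "No such process" and "Process with pid ... is not found" messages instead of a hard failure. A minimal sketch of that pattern (a standalone illustration, not the actual autotest_common.sh helper, which, as the pid-100681 trace further down shows, also inspects the process name via ps before killing):

    pid=98761
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                 # still running: terminate it
    else
        echo "Process with pid $pid is not found"
    fi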
There is no support for it in SPDK. Enabled only for validation. 00:31:27.593 [2024-07-15 17:37:38.210563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.593 [2024-07-15 17:37:38.341109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.163 17:37:38 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.163 17:37:38 ftl -- common/autotest_common.sh@862 -- # return 0 00:31:28.163 17:37:38 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:28.421 nvme0n1 00:31:28.680 17:37:39 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:28.680 17:37:39 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:28.680 17:37:39 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:28.965 17:37:39 ftl -- ftl/common.sh@28 -- # stores=33d78f42-51ff-4500-b168-bc06c12da903 00:31:28.965 17:37:39 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:28.965 17:37:39 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33d78f42-51ff-4500-b168-bc06c12da903 00:31:29.222 17:37:39 ftl -- ftl/ftl.sh@23 -- # killprocess 100681 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@948 -- # '[' -z 100681 ']' 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@952 -- # kill -0 100681 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@953 -- # uname 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100681 00:31:29.222 killing process with pid 100681 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:29.222 17:37:39 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100681' 00:31:29.223 17:37:39 ftl -- common/autotest_common.sh@967 -- # kill 100681 00:31:29.223 17:37:39 ftl -- common/autotest_common.sh@972 -- # wait 100681 00:31:29.789 17:37:40 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:30.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:30.307 Waiting for block devices as requested 00:31:30.307 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.307 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.307 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:30.565 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:35.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:35.833 Remove shared memory files 00:31:35.833 17:37:46 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:35.833 17:37:46 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:35.833 17:37:46 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:35.833 17:37:46 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:35.833 17:37:46 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:35.833 17:37:46 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:35.833 17:37:46 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:35.833 ************************************ 00:31:35.833 END TEST ftl 00:31:35.833 ************************************ 00:31:35.833 00:31:35.833 real 14m23.086s 00:31:35.833 user 16m41.072s 00:31:35.833 sys 1m55.251s 00:31:35.833 17:37:46 ftl -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:31:35.833 17:37:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:35.833 17:37:46 -- common/autotest_common.sh@1142 -- # return 0 00:31:35.833 17:37:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:35.833 17:37:46 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:35.833 17:37:46 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:35.833 17:37:46 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:35.833 17:37:46 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:35.833 17:37:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:35.833 17:37:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:35.833 17:37:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:35.833 17:37:46 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:35.833 17:37:46 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:35.833 17:37:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:35.833 17:37:46 -- common/autotest_common.sh@10 -- # set +x 00:31:35.833 17:37:46 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:35.833 17:37:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:35.833 17:37:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:35.833 17:37:46 -- common/autotest_common.sh@10 -- # set +x 00:31:37.208 INFO: APP EXITING 00:31:37.208 INFO: killing all VMs 00:31:37.208 INFO: killing vhost app 00:31:37.208 INFO: EXIT DONE 00:31:37.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:37.724 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:37.724 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:37.724 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:37.724 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:38.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:38.550 Cleaning 00:31:38.550 Removing: /var/run/dpdk/spdk0/config 00:31:38.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:38.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:38.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:38.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:38.551 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:38.551 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:38.551 Removing: /var/run/dpdk/spdk0 00:31:38.551 Removing: /var/run/dpdk/spdk_pid100215 00:31:38.551 Removing: /var/run/dpdk/spdk_pid100681 00:31:38.551 Removing: /var/run/dpdk/spdk_pid75466 00:31:38.551 Removing: /var/run/dpdk/spdk_pid75629 00:31:38.551 Removing: /var/run/dpdk/spdk_pid75828 00:31:38.551 Removing: /var/run/dpdk/spdk_pid75917 00:31:38.551 Removing: /var/run/dpdk/spdk_pid75945 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76059 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76077 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76236 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76301 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76378 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76470 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76548 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76582 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76625 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76682 00:31:38.551 Removing: /var/run/dpdk/spdk_pid76788 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77222 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77275 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77327 00:31:38.809 
Removing: /var/run/dpdk/spdk_pid77343 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77412 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77428 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77497 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77513 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77566 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77584 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77626 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77644 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77774 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77805 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77886 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77940 00:31:38.809 Removing: /var/run/dpdk/spdk_pid77965 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78032 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78068 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78103 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78139 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78180 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78215 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78251 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78286 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78322 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78363 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78393 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78434 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78464 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78505 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78535 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78576 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78612 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78650 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78694 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78724 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78766 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78837 00:31:38.809 Removing: /var/run/dpdk/spdk_pid78931 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79082 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79155 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79186 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79642 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79735 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79839 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79881 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79912 00:31:38.809 Removing: /var/run/dpdk/spdk_pid79988 00:31:38.809 Removing: /var/run/dpdk/spdk_pid80595 00:31:38.809 Removing: /var/run/dpdk/spdk_pid80631 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81122 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81209 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81318 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81360 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81391 00:31:38.809 Removing: /var/run/dpdk/spdk_pid81417 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83259 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83374 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83389 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83401 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83442 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83446 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83458 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83503 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83507 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83519 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83569 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83573 00:31:38.809 Removing: /var/run/dpdk/spdk_pid83587 00:31:38.809 Removing: /var/run/dpdk/spdk_pid84942 00:31:38.809 Removing: /var/run/dpdk/spdk_pid85026 00:31:38.809 Removing: 
/var/run/dpdk/spdk_pid86407 00:31:38.809 Removing: /var/run/dpdk/spdk_pid87738 00:31:38.809 Removing: /var/run/dpdk/spdk_pid87831 00:31:38.809 Removing: /var/run/dpdk/spdk_pid87924 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88010 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88126 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88204 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88334 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88675 00:31:38.809 Removing: /var/run/dpdk/spdk_pid88706 00:31:38.809 Removing: /var/run/dpdk/spdk_pid89177 00:31:38.809 Removing: /var/run/dpdk/spdk_pid89365 00:31:38.809 Removing: /var/run/dpdk/spdk_pid89460 00:31:39.068 Removing: /var/run/dpdk/spdk_pid89577 00:31:39.068 Removing: /var/run/dpdk/spdk_pid89614 00:31:39.068 Removing: /var/run/dpdk/spdk_pid89640 00:31:39.068 Removing: /var/run/dpdk/spdk_pid89934 00:31:39.068 Removing: /var/run/dpdk/spdk_pid89971 00:31:39.068 Removing: /var/run/dpdk/spdk_pid90023 00:31:39.068 Removing: /var/run/dpdk/spdk_pid90365 00:31:39.068 Removing: /var/run/dpdk/spdk_pid90510 00:31:39.068 Removing: /var/run/dpdk/spdk_pid91283 00:31:39.068 Removing: /var/run/dpdk/spdk_pid91396 00:31:39.068 Removing: /var/run/dpdk/spdk_pid91567 00:31:39.068 Removing: /var/run/dpdk/spdk_pid91659 00:31:39.068 Removing: /var/run/dpdk/spdk_pid91996 00:31:39.068 Removing: /var/run/dpdk/spdk_pid92260 00:31:39.068 Removing: /var/run/dpdk/spdk_pid92616 00:31:39.068 Removing: /var/run/dpdk/spdk_pid92804 00:31:39.068 Removing: /var/run/dpdk/spdk_pid92929 00:31:39.068 Removing: /var/run/dpdk/spdk_pid92965 00:31:39.069 Removing: /var/run/dpdk/spdk_pid93098 00:31:39.069 Removing: /var/run/dpdk/spdk_pid93112 00:31:39.069 Removing: /var/run/dpdk/spdk_pid93159 00:31:39.069 Removing: /var/run/dpdk/spdk_pid93344 00:31:39.069 Removing: /var/run/dpdk/spdk_pid93579 00:31:39.069 Removing: /var/run/dpdk/spdk_pid94009 00:31:39.069 Removing: /var/run/dpdk/spdk_pid94463 00:31:39.069 Removing: /var/run/dpdk/spdk_pid94912 00:31:39.069 Removing: /var/run/dpdk/spdk_pid95433 00:31:39.069 Removing: /var/run/dpdk/spdk_pid95571 00:31:39.069 Removing: /var/run/dpdk/spdk_pid95675 00:31:39.069 Removing: /var/run/dpdk/spdk_pid96312 00:31:39.069 Removing: /var/run/dpdk/spdk_pid96387 00:31:39.069 Removing: /var/run/dpdk/spdk_pid96840 00:31:39.069 Removing: /var/run/dpdk/spdk_pid97257 00:31:39.069 Removing: /var/run/dpdk/spdk_pid97762 00:31:39.069 Removing: /var/run/dpdk/spdk_pid97880 00:31:39.069 Removing: /var/run/dpdk/spdk_pid97922 00:31:39.069 Removing: /var/run/dpdk/spdk_pid97986 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98042 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98102 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98290 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98360 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98432 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98510 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98539 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98612 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98761 00:31:39.069 Removing: /var/run/dpdk/spdk_pid98978 00:31:39.069 Removing: /var/run/dpdk/spdk_pid99360 00:31:39.069 Removing: /var/run/dpdk/spdk_pid99777 00:31:39.069 Clean 00:31:39.069 17:37:49 -- common/autotest_common.sh@1451 -- # return 0 00:31:39.069 17:37:49 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:39.069 17:37:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:39.069 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:31:39.069 17:37:49 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:39.069 17:37:49 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:31:39.069 17:37:49 -- common/autotest_common.sh@10 -- # set +x 00:31:39.327 17:37:49 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:39.327 17:37:49 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:39.327 17:37:49 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:39.327 17:37:49 -- spdk/autotest.sh@391 -- # hash lcov 00:31:39.327 17:37:49 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:39.327 17:37:49 -- spdk/autotest.sh@393 -- # hostname 00:31:39.327 17:37:49 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:39.585 geninfo: WARNING: invalid characters removed from testname! 00:32:11.698 17:38:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:11.698 17:38:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:13.597 17:38:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:16.909 17:38:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:19.440 17:38:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:22.783 17:38:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:25.312 17:38:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:25.312 17:38:35 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:25.312 17:38:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:25.312 17:38:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.312 17:38:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.312 17:38:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.312 17:38:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.312 17:38:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.312 17:38:35 -- paths/export.sh@5 -- $ export PATH 00:32:25.312 17:38:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.312 17:38:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:25.312 17:38:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:25.312 17:38:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721065115.XXXXXX 00:32:25.312 17:38:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721065115.CcIIzg 00:32:25.312 17:38:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:25.312 17:38:35 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:32:25.312 17:38:35 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:32:25.312 17:38:35 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:32:25.312 17:38:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:25.312 17:38:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:25.312 17:38:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:25.312 17:38:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:25.312 17:38:35 -- common/autotest_common.sh@10 -- $ set +x 00:32:25.312 17:38:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:32:25.312 17:38:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:25.312 17:38:35 -- pm/common@17 -- $ local monitor 00:32:25.312 17:38:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:25.312 17:38:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:25.312 17:38:35 -- pm/common@25 -- $ sleep 1 00:32:25.312 17:38:35 -- pm/common@21 -- $ date +%s 00:32:25.312 17:38:35 -- pm/common@21 -- $ date +%s 00:32:25.312 17:38:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721065115 00:32:25.312 17:38:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721065115 00:32:25.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721065115_collect-vmstat.pm.log 00:32:25.312 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721065115_collect-cpu-load.pm.log 00:32:26.245 17:38:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:26.245 17:38:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:26.245 17:38:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:26.245 17:38:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:26.245 17:38:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:26.245 17:38:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:26.245 17:38:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:26.245 17:38:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:26.245 17:38:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:26.245 17:38:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:26.245 17:38:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:26.245 17:38:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:26.245 17:38:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:26.245 17:38:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:26.245 17:38:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:26.245 17:38:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:26.245 17:38:36 -- pm/common@44 -- $ pid=102372 00:32:26.245 17:38:36 -- pm/common@50 -- $ kill -TERM 102372 00:32:26.245 17:38:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:26.245 17:38:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:26.245 17:38:36 -- pm/common@44 -- $ pid=102374 00:32:26.245 17:38:36 -- pm/common@50 -- $ kill -TERM 102374 00:32:26.245 + [[ -n 6041 ]] 00:32:26.245 + sudo kill 6041 00:32:26.254 [Pipeline] } 00:32:26.272 [Pipeline] // timeout 00:32:26.279 [Pipeline] } 00:32:26.299 [Pipeline] // stage 00:32:26.306 [Pipeline] } 00:32:26.324 [Pipeline] // catchError 00:32:26.333 [Pipeline] stage 00:32:26.335 [Pipeline] { (Stop VM) 00:32:26.350 [Pipeline] sh 00:32:26.630 + vagrant halt 00:32:30.865 ==> default: 
Halting domain... 00:32:37.455 [Pipeline] sh 00:32:37.750 + vagrant destroy -f 00:32:41.954 ==> default: Removing domain... 00:32:42.896 [Pipeline] sh 00:32:43.171 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:43.180 [Pipeline] } 00:32:43.197 [Pipeline] // stage 00:32:43.202 [Pipeline] } 00:32:43.219 [Pipeline] // dir 00:32:43.224 [Pipeline] } 00:32:43.240 [Pipeline] // wrap 00:32:43.245 [Pipeline] } 00:32:43.258 [Pipeline] // catchError 00:32:43.267 [Pipeline] stage 00:32:43.269 [Pipeline] { (Epilogue) 00:32:43.282 [Pipeline] sh 00:32:43.582 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:51.703 [Pipeline] catchError 00:32:51.705 [Pipeline] { 00:32:51.719 [Pipeline] sh 00:32:51.996 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:51.997 Artifacts sizes are good 00:32:52.005 [Pipeline] } 00:32:52.024 [Pipeline] // catchError 00:32:52.035 [Pipeline] archiveArtifacts 00:32:52.042 Archiving artifacts 00:32:52.181 [Pipeline] cleanWs 00:32:52.203 [WS-CLEANUP] Deleting project workspace... 00:32:52.203 [WS-CLEANUP] Deferred wipeout is used... 00:32:52.208 [WS-CLEANUP] done 00:32:52.210 [Pipeline] } 00:32:52.228 [Pipeline] // stage 00:32:52.234 [Pipeline] } 00:32:52.247 [Pipeline] // node 00:32:52.251 [Pipeline] End of Pipeline 00:32:52.286 Finished: SUCCESS
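For reference, the clear_lvols step traced earlier (after spdk_tgt was restarted as pid 100681) amounts to listing every lvstore UUID over RPC and deleting each one. A condensed sketch using the same rpc.py calls and jq filter that appear in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done

In this run a single store (33d78f42-51ff-4500-b168-bc06c12da903) was found and removed.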
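Similarly, the coverage post-processing near the end of the run boils down to merging the base and test captures and then pruning non-SPDK paths from the combined tracefile. A shortened sketch of that sequence with the same lcov options seen in the log (the --rc branch/function-coverage switches and the initial geninfo capture are omitted here for brevity):

    out=/home/vagrant/spdk_repo/spdk/../output
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"   # prune in place, as the log does
    done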